The use of artificial intelligence has surged in recent months, bringing benefits not only to the technology sector but also to other areas of life: medicine, manufacturing, cheaper and more sustainable energy, and safer, cleaner transportation. Yet despite the many benefits it has brought and will continue to bring, it is not a subject to be taken lightly, and limits must be set. The EU is therefore beginning to establish rules for the use of AI.
Not surprisingly, a group of experts, including executives and engineers from major AI companies such as Demis Hassabis (Google DeepMind), Dario Amodei (Anthropic), and Sam Altman (OpenAI), has expressed concern, warning that artificial intelligence poses an “existential risk” to humanity. The Center for AI Safety, a nonprofit organization, stated in its open letter that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
This is why the European Union is taking its first steps and leading the world in regulating the use of AI. In April 2021, the first regulatory framework was proposed, calling for AI systems to be evaluated and classified according to the risk they may pose to users. On June 16, 2023, the first risk classifications for AI systems were announced, along with the levels of regulation that will apply to each.
Measures that will shape the framework for the use of AI systems in the EU
These measures should ensure above all that AI systems are transparent, non-discriminatory, environmentally friendly, and safe. They also establish that such systems should not operate fully autonomously but should always be supervised by people, preventing possible irreparable harm and ruling out self-regulation or unsupervised automation.
The regulations establish different levels of risk depending on how AI systems are used, and the resulting obligations will be mandatory for both providers and users. The four tiers are unacceptable risk, high risk, generative AI, and limited risk.
Unacceptable-risk systems:
Systems in this category are considered a threat to individuals and will be strictly prohibited. To be classified as unacceptable, a system would involve or make use of:
- Cognitive behavioral manipulation of people, particularly vulnerable groups such as children. An example would be voice-activated toys that encourage dangerous behavior in children.
- Social scoring: this refers to classifying individuals based on their socioeconomic status, behavior, or personal characteristics.
- Real-time and remote biometric identification systems, such as facial recognition. (In the case of facial recognition systems, there are some exceptions. For example, remote biometric identification systems will be allowed for the pursuit of serious crimes only with prior judicial approval).
For High-risk systems:
AI systems that negatively affect safety or fundamental rights will fall into this classification, and they will be divided into two categories:
- AI systems used in products subject to EU legislation on product safety: toys, aviation, cars, medical devices, and elevators.
- AI systems belonging to eight specific areas that will need to be registered in an EU database:
- Biometric identification and categorization of individuals.
- Education and vocational training.
- Management and operation of critical infrastructure.
- Access to and enjoyment of essential private services and public services and benefits.
- Law enforcement.
- Employment, management of workers, and access to self-employment.
- Assistance in legal interpretation and enforcement.
- Migration management, asylum, and border control.
All high-risk classified systems will be assessed before being marketed and throughout their lifecycle.
For Generative AI systems:
In this classification, generative AIs like ChatGPT will have to meet certain transparency requirements.
- Disclose that the content was generated by AI.
- Design the model to prevent it from generating illegal content.
- Publish summaries of copyrighted data used for training.
Limited-risk AI systems:
These systems must meet basic transparency standards that allow users to make informed decisions. After interacting with such a system, users should be free to decide whether to continue using it, and they should be aware that they are interacting with AI. This also means that systems that generate or manipulate image, audio, or video content must make that fact known.
OpenAI’s stance on withdrawing ChatGPT from the European Union
When concerns were raised about the widespread use of ChatGPT in Europe and the European Union’s plans to regulate it, Sam Altman indicated that OpenAI would withdraw ChatGPT from Europe if the restrictions were too strict and the company believed it could not comply with them. However, in a tweet published in May 2023, he stated that OpenAI has “no plans” to remove ChatGPT from Europe.
What steps will follow?
Members of the European Parliament have adopted their position on the AI law. Discussions on its final form will continue in the Council, together with the EU member states, with the goal of reaching an agreement by the end of 2023.