EU AI Act: what it is and what it requires
Expected to come into force in 2024, it will be the world’s first Artificial Intelligence regulation
The regulation of artificial intelligence applications is a fundamental step towards a responsible and safe use of this powerful technology.
In recent years, Artificial Intelligence (AI) has taken a central role in the digital transformation of our world, influencing a wide range of sectors, from industry to healthcare, from mobility to finance. Inevitably, this rapid evolution has made regulation of the technology, which by its nature (i.e. the ability to make decisions and self-learn) carries important and sensitive ethical implications, increasingly necessary and urgent.
Since April 2021, the EU has therefore been working on the so-called AI Act, the first regulatory framework for Artificial Intelligence, with final approval expected at the end of 2023 and entry into force between 2024 and 2025.
What the AI Act is and why it is important
The aim of the AI Act is to ensure that AI systems used within the European Union are fully in line with EU rights and values, guaranteeing human control, security, privacy, transparency, non-discrimination, and social and environmental well-being.
AI applications, in fact, are achieving ever-increasing real-time analysis capabilities and can ‘decide’ which actions to take based on the available data. It is precisely this ability to decide and learn that makes the ethical evaluations we mentioned at the beginning of the article necessary. If an AI application makes a wrong decision or one that unfairly privileges or harms a human being, whose responsibility will it be? This is why, with the spread of this technology, the EU has decided to work on what will be the world’s first AI regulation.
A very complex task: the rules governing the use of the technology must protect citizens' privacy and safety without limiting the scope for experimenting with new applications.
What the AI Act provides for
The AI Regulation establishes 4 risk levels for categorizing AI applications, which will consequently be subject to different degrees of monitoring.
Unacceptable risk
Applications using subliminal techniques, as well as social scoring systems used by public authorities, are strictly prohibited. Real-time remote biometric identification systems used by law enforcement in publicly accessible spaces are also banned.
High risk
This category includes applications related to transportation, education, employment and welfare, among others. Before placing a high-risk AI system on the market or putting it into service in the EU, companies must conduct a preliminary “conformity assessment” and meet a long list of requirements to ensure the safety of the system. As a pragmatic measure, the regulation also requires the European Commission to create and maintain a publicly accessible database to which providers must submit information on their high-risk AI systems, ensuring transparency for all stakeholders.
Limited risk
This refers to AI systems subject to specific transparency obligations. For example, a person interacting with a chatbot must be informed that they are talking to a machine, so that they can decide whether to continue (or ask to speak with a human).
Minimal risk
These applications are already widely deployed and account for most of the AI systems we interact with today. Examples include spam filters, AI-enabled video games and inventory management systems.
In addition, the AI Act states that primary responsibility will rest with the “providers” of AI systems; however, certain obligations will also fall on distributors, importers, users and other third parties, affecting the entire ecosystem.
AI in Italy: some data
In the meantime, AI applications in Italy continue to spread: the Artificial Intelligence Observatory of the Politecnico di Milano, in its latest report (February 2023), calculated that the artificial intelligence market in Italy has reached a value of 500 million euros, up 32% from 2021, and that 61% of large Italian enterprises have already started an AI project, a share that stops at 15% among SMEs, where growth is nevertheless expected over the next 24 months.
Regulating artificial intelligence applications is a fundamental step toward responsible and safe use of this powerful technology, shaping a future in which AI is a reliable ally that respects human values and helps create a safer, more ethical digital environment for all. The key lies in the balance between technological innovation and the protection of human rights, a goal that requires sustained commitment from all the actors involved.