Every country should adopt legislation modeled on the EU's Artificial Intelligence Act to prevent the misuse of AI for criminal activities.

Governments worldwide are wrestling with how to manage the rapid expansion of artificial intelligence (AI), a technology that promises to boost national economies and simplify routine tasks but also carries significant risks, including AI-enabled crime, misinformation, heightened surveillance, and increased discrimination. The Council of Europe, which represents 46 member states, has made history by adopting the first international treaty requiring AI to respect human rights, democracy, and the rule of law.


The European Union (EU) has taken a pioneering approach to addressing these risks with its recently enacted Artificial Intelligence Act, the first law of its kind to comprehensively regulate AI risks. The legislation, which entered into force on August 1, 2024, sets requirements according to the level of risk posed by different AI systems. Higher-risk applications, particularly those affecting health, safety, or human rights, face stricter obligations. For example, AI systems that use subliminal techniques to manipulate decisions, or that enable indiscriminate facial recognition by law enforcement, are prohibited outright. Other high-risk systems, such as those used in education, healthcare, and government services, must meet rigorous standards for data quality, accuracy, and cybersecurity, and must include human oversight.


Even lower-risk AI systems, such as chatbots, must follow transparency rules, including informing users that they are interacting with AI rather than a human, and AI-generated content must be clearly labeled as such. Designated authorities in the EU and its member states will monitor compliance and impose penalties for violations. This framework offers a model for other countries, such as Australia, as they work to ensure AI is both safe and beneficial for all.

