European Union lawmakers gave their official approval to the bloc’s groundbreaking AI regulation, clearing the path for the EU to restrict specific applications of the technology and mandate transparency from providers. On Wednesday, 523 members of the European Parliament voted to formally endorse the Artificial Intelligence Act (AI Act), marking the beginning of efforts to implement and enforce the regulation.
What does all this mean? Let’s break it down.
Since its initial proposal in 2021, the AI Act has been the subject of intense debate, particularly regarding its stringent regulations. For instance, a proposed complete prohibition on biometric systems for mass public surveillance faced last-minute compromises. And while Wednesday’s vote indicates that the law is nearing final approval, some of its rules will not be fully enforced for several years.
The “legal language” of the text is still pending final approval, expected either through a separate announcement or a vote during the plenary session on April 10th or 11th. Following this, the AI Act will officially take effect 20 days after its publication in the Official Journal, anticipated to occur in May or June of this year. The provisions will be implemented gradually: countries will have six months to prohibit banned AI systems, 12 months to enforce regulations concerning “general-purpose AI systems” such as chatbots, and up to 36 months for AI systems categorized as “high risk” under the law.
Among the prohibited AI systems are those involved in activities such as social scoring, emotion recognition within workplaces or educational institutions, or systems designed to manipulate behavior or exploit user vulnerabilities. Meanwhile, examples of “high-risk” AI systems encompass applications in critical infrastructure, education, and vocational training, specific law enforcement systems, and those capable of influencing democratic processes such as elections.
Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights, says the act will take effect as early as 2025. Barrett expects leading AI companies such as OpenAI, Microsoft, Google, and Meta to continue their pursuit of dominance, especially as they navigate regulatory uncertainty in the United States.
The United States has struggled to establish comprehensive AI regulation, despite being home to major industry players like Meta, Amazon, Adobe, Google, Nvidia, and OpenAI. The most significant action to date has been an executive order from the Biden administration instructing government agencies to develop safety standards and build on voluntary agreements signed by prominent AI companies. Senate bills have mainly focused on issues such as deepfakes and watermarking, while closed-door AI forums led by Senator Chuck Schumer (D-NY) have provided minimal clarity regarding the government’s approach to regulating the technology.
The legislation does not apply retroactively to regulate existing apps or models. However, upcoming iterations of platforms like OpenAI’s GPT, Meta’s Llama, or Google’s Gemini will be required to adhere to the transparency standards outlined by the EU. While the effects may not be immediately drastic, this underscores the EU’s stance on AI and its commitment to transparency and accountability in the field.




