What Does the Enactment of the EU AI Act Mean?
On March 13, 2024, the European Parliament officially approved the EU AI Act with 523 votes in favor, 46 against, and 49 abstentions, marking a critical step toward completing the legislative process.
As the world's first comprehensive law on AI governance, the EU AI Act establishes a binding, globally unprecedented framework for trustworthy AI. This positions the EU at the forefront of legal regulation of this strategic technology and its applications, while also setting an anchor point for the future direction of AI rules and regulatory ecosystems in the West.
The legislative hallmark of the EU AI Act lies in its construction of a specialized legal governance framework tailored to emerging AI technologies and application scenarios, grounded in Europe's longstanding values. Specifically, it assigns operational rules and compliance requirements according to the risk level and potential impact of different AI systems. Its key institutional focuses are as follows. First, it specifies prohibited AI applications. The Act bans certain applications that threaten citizens' rights, including biometric categorization systems based on sensitive characteristics and the indiscriminate scraping of facial images from the internet or CCTV footage to create facial recognition databases. Emotion recognition in workplaces and schools, social scoring, certain forms of predictive policing, and AI that manipulates human behavior or exploits people's vulnerabilities are also prohibited.
Second, it establishes compliance obligations for high-risk AI. The Act addresses "high-risk AI systems" (those with significant potential to harm health, safety, fundamental rights, the environment, democracy, or the rule of law), including systems used in critical infrastructure, education and vocational training, employment, essential private and public services, law enforcement, migration and border management, and the administration of justice and democratic processes. It imposes comprehensive compliance requirements: risk assessment and mitigation, preservation of usage logs, transparency and accuracy, and human oversight. Citizens have the right to file complaints about AI systems and to receive explanations of decisions made by high-risk AI systems that affect their rights.

Third, it defines transparency requirements for general-purpose AI. Addressing what is currently the most strategically significant scenario, the Act specifically targets general-purpose AI (GPAI) systems and the GPAI models underlying them, requiring that they meet certain transparency obligations, including compliance with EU copyright law and the publication of detailed summaries of the content used for training. More powerful GPAI models that may pose systemic risks must fulfill additional requirements, including model evaluations, safety assessments, systemic-risk mitigation, and incident reporting. In addition, artificially generated or manipulated images, audio, or video, commonly known as "deepfakes," must be clearly labelled as artificial or manipulated by the responsible parties.
Overall, the core legislative purpose of the EU AI Act is to protect fundamental rights, democracy, the rule of law, and environmental sustainability from the impact of high-risk AI, while promoting innovation and development. It aims to establish Europe as a globally trusted hub for AI, to strengthen the use of, investment in, and innovation around AI across the EU, and to position Europe as a world leader in AI research, application, and legal governance. Notably, the Act traces back to the EU's 2018 AI strategy, and the entire legislative process has drawn close attention from stakeholders worldwide. Its introduction and implementation will continue to produce three major impacts on the broader ecosystem:
First, it influences the concepts and methods of AI regulation. On one hand, the AI Act emphasizes continued promotion and strengthening of cooperation among EU stakeholders in AI technology, with the important goal of enhancing the EU's technological competitiveness while adhering to EU values. On the other hand, the Act's institutional design rests on classifying and grading AI systems, focusing on risk regulation tied to specific application scenarios. Through the establishment of an AI Office and the introduction of tools such as model evaluation, safety assessment, risk mitigation, and transparency requirements, it systematically constructs a governance framework for AI safety. This risk-oriented regulatory approach extends the core methodology of the EU's new generation of digital legislation, including the General Data Protection Regulation (GDPR), the Regulation on the Free Flow of Non-Personal Data, the Data Governance Act, the Data Act, the Digital Services Act, and the Digital Markets Act, to the field of artificial intelligence.

Second, it impacts the international governance ecosystem of AI. As noted above, a key feature of the AI Act is its foundation in traditional European values such as human rights, democracy, freedom, and the rule of law, and its implementation relies heavily on the economic scale of the EU market and the EU's relative soft power. Considering the EU's past advocacy and practice regarding technological and digital sovereignty, as well as its various actions in international AI governance, particularly its coordination with countries such as the U.S., U.K., Japan, and Canada in the drafting of the Cybercrime Convention and the AI resolution under the UN framework, the "European model" is poised to emerge as a prominent governance framework.
This model is likely to become a significant reference point in the international AI governance ecosystem, exerting a notable "regulatory spillover effect" on the value choices and enforcement methods of other nations' policies and regulations. Furthermore, through international agreements and trade deals, the EU will continue to expand its influence globally.

Third, it affects AI industry applications and compliance. The AI Act establishes a comprehensive framework of operational rules and compliance obligations tailored to different AI technologies, application scenarios, and their associated risk levels. In practice, these will be implemented progressively through the AI Office as the central enforcement body, with considerable intensity, breadth, and speed. Once fully in force, the Act will directly influence the R&D directions and investment structures of AI enterprises, while also shaping market strategies, business operations, and compliance approaches across every layer of the AI ecosystem, from infrastructure, cloud computing, and data centers to large models. This will profoundly reshape the future landscape of the AI and digital industries. Continuous, systematic analysis of the Act's implementation progress and enforcement priorities is therefore essential, as are timely risk assessments for affected industries, thorough review of strategic business portfolios and digital asset structures, and the design of economically viable, agile, and feasible compliance solutions aligned with AI's technical logic.