
The European Union's AI Act: The Challenges of AI Governance

  • Writer: Guramrit DHILLON
  • May 3
  • 6 min read

Written by Viren Gemini. Edited by Gursehaj Gosal.


Viren Gemini is an undergraduate student in the Dual BA Program between Sciences Po and NUS, studying Economics and Politics. His academic interests lie in economic development policy, colonial history and human rights law.


The EU AI Act sets a precedent worldwide as the first AI regulation framework of its kind. This comprehensive, landmark legislation aims to regulate the development of AI systems, their introduction into EU markets, and the implementation and use of these products. The Act's broad mandate, combined with serious penalties for non-compliance of up to 35 million euros or 7 percent of global turnover (whichever is higher), produces a long list of important stakeholders. Entities ranging from developers of AI systems such as ChatGPT or DeepSeek, to SMEs working on machine learning models, to firms that already integrate or intend to integrate high-risk AI systems (banks, recruiters, etc.), to basic online marketplaces using AI assistants are all affected by the new act.


The EU AI Act: Main Goals and Important Provisions


The EU’s vision is to work towards developing a global market for reliable and trustworthy AI systems. The Act is structured through a risk-based AI classification system, and as the risk associated with the AI system increases (based on the sensitivity of the sectors they are used in and other specified criteria), the compliance requirements become stricter. 


AI systems associated with certain dangers face outright bans, such as voice-activated devices for children that lack proper safeguards, and privacy-invasive biometric identification systems such as facial recognition in public spaces, although exceptions exist for law enforcement agencies. High-risk AI systems are the primary focus of the Act, whilst regulations for limited-risk AI systems are largely centred around transparency. Low-risk AI systems are left largely unregulated.


General-Purpose AI (GPAI) models, which can perform a wide range of tasks across several domains, such as OpenAI’s ChatGPT, Google’s Gemini, and Meta’s Llama, must ensure that their training data complies with copyright law, namely the EU Directive on Copyright in the Digital Single Market 2019 [1].


Why AI Governance in the EU Poses Key Challenges


The AI “Action” Summit held in Paris marked a shift in the priorities of AI governance: away from establishing cautionary safety guidelines (the inaugural summit, held in the UK, was called the AI Safety Summit) and towards helping economies deal with the practicalities of AI development and use in an increasingly competitive market [2]. The challenge is balancing regulation with innovation in the race for AI dominance.


The challenges the AI Act will face across the EU can be anticipated by analysing the consequences of past EU policies that affected digital industries. The EU’s earlier General Data Protection Regulation (GDPR), which focused on enhancing data privacy, faced strong criticism for imposing high compliance costs that raised barriers to entry [3]. Similarly, Article 43 of the EU AI Act, titled “Conformity Assessments”, references various annexes detailing the depth of assessments of risk management, data governance, transparency, human oversight, and cybersecurity, both at market introduction and during operation (in the case of major changes to the AI systems) [4]. As to the GDPR’s record, a paper by Geradin, Katsifis and Karanikioti analyses how the regulation ended up strengthening Google’s market power in the ad tech industry and reducing competition. It also points to the GDPR’s uneven enforcement: Google’s “questionable data-related practices” were forgiven by the Irish supervising authority, whilst smaller companies across the EU experienced stricter intervention by data protection authorities [5]. Many companies have blamed the failure of their EU branches on this regulation, and it has arguably contributed to a fall in innovation among digital startups [6]. Such enforcement complexities and negative consequences, in the context of increasing global AI competitiveness, foreshadow a growing anxiety that the EU will fall behind in the AI race.


The fact remains that the EU derives its law-making authority from its member states, which retain significant control over their domestic strategies; some legal scholars (although not all) term this a case of “shared sovereignty”. There is an imminent risk of tension between the EU AI Act and national autonomy, as member states are already pursuing AI strategies that differ both from the goals of the Act and from each other. For instance, Spain, though lacking specific national AI legislation, has already established a supervisory agency with powers to inspect and sanction in line with the EU AI Act [7]. Meanwhile, countries like France remained sceptical of parts of the bill until early 2024, citing concerns that it could stifle innovation and hinder the growth of domestic startups such as Mistral AI and LightOn, potential competitors of OpenAI and Google [8]. Despite the three EU bodies reaching an agreement in December 2023, Paris continued to express reservations about the proposed regulation [9]. Such hiccups may recur and could result in serious gridlock or re-openings; a delay has already been announced, with the stated aim of incorporating industry feedback and making certain aspects more SME-friendly, according to the Commission [10].


The EU’s Main Human Rights Guarantor against Emerging AI Systems


The Act’s effectiveness in safeguarding human rights against the risks posed by AI remains a subject of debate. The safeguarding responsibility largely lies with the General Data Protection Regulation (GDPR). However, this creates a gap in redress: challenging AI systems beyond the scope of personal data is difficult, as there are few provisions protecting human rights against physical, social, or financial harm caused by AI systems.


Organisations like European Digital Rights (EDRi) actively call for a publicly available Fundamental Rights Impact Assessment (FRIA) for all high-risk AI systems, a comprehensive complaint system to record and act on human rights violations by AI systems, and a “full ban” on dangerous AI applications by law enforcement such as remote biometric identification (RBI), crime prediction, and uses in migration control [11]. The Act also fails to extend human rights protection from EU-made AI systems to people outside the EU, as banned AI systems can still be exported.


In April 2024, multiple human rights organisations (including Amnesty International, the European Disability Forum, ECNL, AlgorithmWatch and others) published a joint analysis of the shortcomings of the EU AI Act in ensuring the protection of equality, non-discrimination and broader human rights [12]. Although the FRIA has been included, it fails to meet the guidelines of major watchdog organisations: it lacks steps to ensure the listed risks do not translate into rights violations, excludes civil society from the assessment, offers limited accountability, and carves out major transparency exceptions for public and law enforcement agencies.


Countries around the world are working to balance the protection of rights with space for innovation and growth. The EU AI Act will come into effect against a backdrop of relatively relaxed and flexible AI regulation frameworks in the US and China, the EU’s two biggest competitors. As opposed to FRIAs and stringent conformity assessments, their approach of regulatory sandboxes and adaptive compliance paves the way for AI dominance, at the risk of insufficient oversight and privacy breaches. The process of AI regulation in the EU is marked by delays, shifting global political dynamics against AI regulation, and active calls for human rights protection. Time will reveal whether the final version of this unprecedented act will follow the legacy of the Brussels Effect.


Bibliography


[1] EU Artificial Intelligence Act. “High-Level Summary of the AI Act.” EU Artificial Intelligence Act, 27 Feb. 2024, artificialintelligenceact.eu/high-level-summary/.


[2] “Pledge for a Trustworthy AI in the World of Work.” Elysee.fr, 11 Feb. 2025, www.elysee.fr/emmanuel-macron/2025/02/11/pledge-for-a-trustworthy-ai-in-the-world-of-work. ; “Remarks by the Vice President at the Artificial Intelligence Action Summit in Paris, France | the American Presidency Project.” Ucsb.edu, 2025, www.presidency.ucsb.edu/documents/remarks-the-vice-president-the-artificial-intelligence-action-summit-paris-france.


[3] Hofmann, Herwig C. H., and Lisette Mustert. The Future of GDPR Enforcement. 28 Nov. 2024, verfassungsblog.de/the-future-of-gdpr-enforcement, https://doi.org/10.59704/8c530ef633b462d7.


[4] “Article 43: Conformity Assessment.” EU Artificial Intelligence Act, artificialintelligenceact.eu/article/43/.


[5] Geradin, Damien, et al. “GDPR Myopia: How a Well-Intended Regulation Ended up Favoring Google in Ad Tech.” SSRN Electronic Journal, 2020, https://doi.org/10.2139/ssrn.3598130.


[6] Janßen, Rebecca, et al. “GDPR and the Lost Generation of Innovative Apps.” National Bureau of Economic Research, May 2022, https://doi.org/10.3386/w30028.


[7] “AI Watch: Global Regulatory Tracker - Spain | White & Case LLP.” www.whitecase.com, 13 May 2024, www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-spain.


[8] Martin, Nicholas, et al. “How Data Protection Regulation Affects Startup Innovation.” Information Systems Frontiers, vol. 21, no. 6, 18 Nov. 2019, pp. 1307–1324. springer, link.springer.com/article/10.1007/s10796-019-09974-2, https://doi.org/10.1007/s10796-019-09974-2.



[10] Kroet, Cynthia. “Drafting of AI Code of Practice Faces at Least One Month Delay.” Euronews, Euronews.com, 20 Feb. 2025, www.euronews.com/next/2025/02/20/drafting-of-ai-code-of-practice-faces-at-least-one-month-delay. ; Moreau, Claudie. “Commission Line Confused on Potential AI Act Re-Opening.” Euractiv, EURACTIV, 19 Feb. 2025, www.euractiv.com/section/tech/news/commission-line-confused-on-potential-ai-act-re-opening/.


[11] “Protect People’s Rights in the AI Act.” European Digital Rights (EDRi), 2022, edri.org/our-work/civil-society-urges-european-parliament-to-protect-peoples-rights-in-the-ai-act/.


[12] Jakubowska, Ella, Kave Noori, Mher Hakobyan, Karolina Iwańska, Kilian Vieth-Ditlmann, Nikolett Aszodi, Judith Membrives Llorens, et al. “EU’s AI Act Fails to Set Gold Standard for Human Rights.” 2024, edri.org/wp-content/uploads/2024/04/EUs-AI-Act-fails-to-set-gold-standard-for-human-rights.pdf.


© 2025 by Sciences Po Le Havre Campus Undergraduate Law Review.  

 

Disclaimer:

The opinions expressed herein are solely those of the respective authors and are not endorsed by the Sciences Po Undergraduate Law Review. The Sciences Po Undergraduate Law Review is a student-run, non-partisan publication and does not reflect the views of Sciences Po Le Havre or its administration. The views expressed do not represent those of the editorial board as a whole. This publication does not purport to be a graduate-level law review.
