France is a country that prides itself on its technological prowess and its ambition to become a leader in artificial intelligence (AI). But it is also a country that has a strong tradition of regulating the digital sector, often clashing with the U.S. and other global players over issues such as data privacy, taxation, and competition.
This tension between France’s AI hopes and its love of regulating tech is becoming more apparent as the European Union (EU) moves closer to adopting the world’s first comprehensive AI law, which aims to set ethical and legal standards for the use and development of AI across the bloc.
The proposed EU AI Act, first unveiled in April 2021, would classify AI systems into four risk categories: unacceptable, high, limited, and minimal. Unacceptable AI systems, such as those that manipulate human behavior or exploit vulnerabilities, would be banned outright. High-risk AI systems, such as those used for facial recognition, biometric identification, or critical infrastructure, would be subject to strict requirements, such as human oversight, transparency, and quality assurance. Limited-risk AI systems, such as those that generate content or recommendations, would have to inform users that they are interacting with AI. Minimal-risk AI systems, such as those used for video games or spam filters, would be largely exempt from regulation.
The EU AI Act has been praised by some as a landmark initiative that would ensure that AI is trustworthy, human-centric, and respectful of fundamental rights. However, it has also faced criticism from various stakeholders, including industry groups, civil society organizations, and member states, who have raised concerns about the scope, clarity, and feasibility of the regulation.
France, in particular, has emerged as a vocal opponent of some aspects of the EU AI Act, especially the provisions on high-risk AI systems. The French government argues that the regulation is too restrictive and prescriptive, and that it would stifle innovation and competitiveness in the AI sector. It also contends that the regulation does not sufficiently account for the diversity and specificity of AI applications, and that it would create legal uncertainty and administrative burdens for AI providers and users.
France’s stance on the EU AI Act reflects its own vision and strategy for AI, which it has been developing since 2018, when President Emmanuel Macron launched a national AI plan with a budget of 1.5 billion euros. The plan aims to make France a global leader in AI research and innovation, while also ensuring that AI is ethical, inclusive, and beneficial for society. France has also been active in promoting international cooperation and dialogue on AI, notably through the Global Partnership on AI (GPAI), a multilateral initiative that brings together experts from governments, industry, academia, and civil society to advance the responsible and human-centric development and use of AI.
France’s position on the EU AI Act is not shared by all of its European partners, however. Germany and Italy, for instance, have expressed support for the regulation, and have called for a swift and ambitious adoption of the law. They argue that the EU AI Act would provide a common framework and a level playing field for the AI market in Europe, and that it would enhance trust and confidence in AI among consumers and citizens. They also stress that the regulation would not prevent innovation, but rather foster it, by creating incentives and opportunities for the development of ethical and sustainable AI solutions.
The debate over the EU AI Act is expected to continue in the coming months, as the European Parliament and the Council of the EU, which represents the governments of the member states, will have to agree on a final version of the law. The outcome of this process will have significant implications for the future of AI in Europe and beyond, as the EU AI Act could set a global precedent and influence other countries and regions that are considering or developing their own AI regulations.