Microsoft, one of the world’s leading technology companies, is considering expanding its artificial intelligence (AI) business in China, where the government is actively promoting the development and adoption of AI. However, Microsoft’s plans may face challenges in the European Union (EU), where the bloc is working on regulations to ensure the ethical and safe use of AI.
China’s Approach to AI Regulation: State Control and Economic Dynamism
China sees AI as a strategic technology that can help it achieve its economic and geopolitical goals, and has been investing heavily in AI research and innovation. In 2021, China accounted for nearly one-fifth of global private AI investment, attracting $17 billion for AI start-ups. China has also been building its bureaucratic toolkit to propose new AI governance rules quickly and iteratively, allowing it to adjust regulatory guidance as new uses of the technology emerge.
However, China’s approach to AI also raises concerns about privacy and civil liberties, as the government has been known to use AI for surveillance, censorship, and social control. On July 13, 2023, the Cyberspace Administration of China (CAC) released its “Interim Measures for the Management of Generative Artificial Intelligence Services”, which lay out rules for those who provide generative AI services to the public in China. Generative AI is a type of AI that can create new content, such as text, images, or audio, based on input data.
According to the CAC, generative AI providers must uphold the integrity of state power, refrain from inciting secession, safeguard national unity, preserve economic and social order, and ensure the development of products aligning with the country’s socialist values. Generative AI systems that exploit vulnerabilities of certain groups of natural persons, or that generate incitement against the State, are prohibited.
The use of biometric identification systems in publicly accessible spaces is also banned, unless there is a specific law or court order explicitly allowing the use of such systems (e.g. for the prosecution of criminal offences). Furthermore, firms must obtain a license to provide generative AI services to the public and submit a security assessment if the model has public opinion attributes or social mobilization capabilities.
EU’s Approach to AI Regulation: Precaution and Protection
The EU, on the other hand, is taking a more precautionary and protective approach to AI regulation, aiming to ensure the ethical and safe use of AI in line with the bloc’s values and fundamental rights. The EU has proposed a draft Artificial Intelligence Act (AI Act), which is expected to be agreed by the end of 2023. The AI Act focuses on banning some uses and allowing others, while laying out due diligence for AI firms to follow.
The AI Act defines an AI system as “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”. The techniques and approaches listed in Annex I include machine learning approaches; logic- and knowledge-based approaches; and statistical approaches, Bayesian estimation, and search and optimization methods.
The AI Act sets out four categories of AI systems based on their level of risk: prohibited, high-risk, limited-risk, and minimal-risk. Prohibited AI systems include those that manipulate human behavior, opinions, or decisions, or that exploit vulnerabilities of specific groups, in a manner that causes or is likely to cause physical or psychological harm. Also prohibited are AI systems used for social scoring by public authorities, and AI systems that enable ‘real-time’ remote biometric identification in publicly accessible spaces for law enforcement purposes, unless certain exceptions apply. High-risk AI systems include those used in various areas sensitive to fundamental rights, such as critical infrastructure, education and vocational training, employment, essential public and private services, law enforcement, migration, and biometric identification.
High-risk AI systems are subject to strict obligations, such as conformity assessment, risk management, data quality, transparency, human oversight, and accuracy requirements. Limited-risk AI systems include those that interact with humans (such as chatbots), those that use emotion recognition or biometric categorization, and those that generate or manipulate image, audio, or video content (“deepfakes”). Limited-risk AI systems are subject to transparency obligations, such as informing users that they are interacting with an AI system, that emotion recognition is in use, or that content has been artificially generated or manipulated. Minimal-risk AI systems, such as AI-enabled video games or spam filters, pose no or negligible risk to fundamental rights and are subject only to voluntary codes of conduct and best practices.
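As an illustration only, the four-tier structure above can be expressed as a simple lookup. The tier names and example use cases come from the draft Act as summarized here; the mapping itself is a hypothetical sketch, not legal guidance, since real classification depends on the Act’s annexes and the system’s concrete context of use.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

# Hypothetical mapping of example use cases to the draft AI Act's four
# tiers; actual classification depends on the Act's annexes and context.
EXAMPLE_TIERS = {
    "social scoring by public authorities": RiskTier.PROHIBITED,
    "CV screening for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations(tier: RiskTier) -> str:
    """Summarize the draft Act's obligations for each tier."""
    return {
        RiskTier.PROHIBITED: "banned outright (narrow exceptions apply)",
        RiskTier.HIGH: "conformity assessment, risk management, data quality, "
                       "transparency, human oversight, accuracy",
        RiskTier.LIMITED: "transparency duties (disclose AI involvement)",
        RiskTier.MINIMAL: "voluntary codes of conduct",
    }[tier]

if __name__ == "__main__":
    for use_case, tier in EXAMPLE_TIERS.items():
        print(f"{use_case}: {tier.value} -> {obligations(tier)}")
```

The point of the tiered design is that compliance cost scales with risk: a spam filter carries no mandatory duties, while a recruitment screening tool triggers the full high-risk obligation set.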
Microsoft’s AI Ambitions in China and EU: Opportunities and Challenges
Microsoft, which has a global presence and a diverse portfolio of AI products and services, is keen to tap into the opportunities and overcome the challenges posed by the different regulatory approaches in China and the EU. Microsoft has been operating in China since 1992, and has established several research and development centers, cloud computing platforms, and AI initiatives in the country. Microsoft has also collaborated with Chinese companies, universities, and government agencies on various AI projects, such as facial recognition, natural language processing, and cloud computing.
However, Microsoft’s AI ambitions in China may face some obstacles, such as the CAC’s new rules on generative AI, which could limit the scope and scale of Microsoft’s AI offerings in the country. For instance, Microsoft’s Azure Cognitive Services, whose capabilities range from text analytics and computer vision to generative features such as speech synthesis, may have to comply with the CAC’s licensing and security assessment requirements, as well as the ethical and social norms imposed by the Chinese government. Moreover, Microsoft may have to balance its business interests in China with its corporate values and social responsibilities, especially in light of the human rights concerns raised by some of its AI applications in China, such as the use of facial recognition for surveillance and social control.
On the other hand, Microsoft’s AI ambitions in the EU may benefit from the bloc’s harmonized and comprehensive framework for AI regulation, which could provide legal certainty and consumer trust for Microsoft’s AI products and services. Microsoft has expressed its support for the EU’s draft AI Act, stating that it “welcomes the EU’s balanced and proportionate approach towards ensuring that AI systems are trustworthy and aligned with the EU’s values and fundamental rights”. Microsoft has also advocated for a human-centric and responsible approach to AI development and deployment, and has adopted its own AI principles and practices, such as fairness, reliability, privacy, security, inclusiveness, transparency, and accountability.
However, Microsoft’s AI ambitions in the EU may also encounter some difficulties, such as the complexity and cost of complying with the EU’s stringent and detailed requirements for high-risk AI systems, which could affect Microsoft’s competitiveness and innovation in the EU market.
For example, Microsoft’s Azure Cognitive Services, which provide capabilities that could be classified as high-risk depending on how they are deployed, such as face detection, content moderation, and anomaly detection, may have to meet obligations covering conformity assessment, risk management, data quality, transparency, human oversight, and accuracy, which could entail significant time and resources for Microsoft. Furthermore, Microsoft may have to adapt its AI products and services to the EU’s specific legal and cultural context, and to the preferences and expectations of the EU’s consumers and stakeholders, which may differ from those in other regions, such as China or the US.
Microsoft’s AI ambitions in China and the EU reflect the company’s vision to “empower every person and every organization on the planet to achieve more” with AI. However, Microsoft’s plans may also face challenges in navigating the different regulatory approaches and environments in China and the EU, which have implications for the ethical and social impact of AI. Microsoft will have to balance its business goals with its corporate values and social responsibilities, and to tailor its AI products and services to the needs and demands of its customers and partners in China and the EU.