The UK prime minister announced a global summit on AI safety, but failed to address the urgent need for regulation and oversight of the technology.
Rishi Sunak, the UK prime minister, has recently shifted his tone on artificial intelligence (AI), from being overwhelmingly optimistic about its potential benefits to acknowledging its “existential risks”. He has also announced that the UK will host a global summit on AI safety in the autumn and establish a UK AI safety institute to test new types of AI for a range of hazards.
However, these initiatives are not enough to address the serious challenges posed by AI, especially in the absence of any concrete regulatory measures or binding commitments from the government. Sunak’s AI plan is largely symbolic and vague, leaving the door open for big tech companies to exploit the technology for their own interests, without regard for the public good or the ethical implications.
AI poses multiple threats to society, democracy and human rights.
AI is not a neutral or benign technology. It can be used for good or evil, depending on who controls it and for what purpose. AI can enable innovation and social progress, but it can also facilitate oppression, discrimination and violence.
Some of the threats that AI poses include:
- Generating misinformation and disinformation, which can undermine trust in institutions, spread false or harmful narratives, and manipulate public opinion and behaviour.
- Enabling mass surveillance and data exploitation, which can violate privacy, erode civil liberties, and enable authoritarian regimes to monitor and control their citizens.
- Automating warfare and weaponisation, which can increase the risk of conflict, escalation and civilian casualties, and create new ethical dilemmas and accountability gaps.
- Disrupting labour markets and economic systems, which can lead to unemployment, inequality and social unrest, and challenge existing norms and values.
- Exceeding human intelligence and control, which can result in unpredictable and potentially catastrophic outcomes, as well as posing existential questions about the future of humanity.
The UK government has failed to take meaningful action to regulate and govern AI.
Despite acknowledging the risks of AI, the UK government has not taken any significant steps to regulate or govern the technology. Instead, it has relied on voluntary codes of conduct, self-regulation by the industry, and non-binding principles and guidelines.
This approach is insufficient and ineffective, as it provides no legal or enforceable mechanisms to ensure that AI is developed and deployed in a safe, ethical and responsible manner. Nor does it address the power imbalance between the public and private sectors, or the global and national dimensions of AI governance.
The UK government has also been inconsistent and contradictory in its stance on AI, simultaneously promoting and investing in the technology while neglecting or undermining its social and environmental impacts. For example, the government has supported the development of facial recognition technology, despite its proven flaws and biases, and has opposed the EU’s proposed ban on high-risk AI applications, such as mass surveillance and social scoring.
The UK needs a comprehensive and coherent strategy for AI governance.
The UK has an opportunity and a responsibility to play a leading role in shaping the global agenda on AI governance, as well as ensuring that its own domestic policies and practices are aligned with the highest standards of safety, ethics and human rights.
To do this, the UK needs a comprehensive and coherent strategy for AI governance, which should include:
- Establishing a clear and consistent legal and regulatory framework for AI, which defines the scope, objectives and principles of AI governance, as well as the roles and responsibilities of different actors and stakeholders.
- Creating a robust and independent oversight and accountability mechanism for AI, which monitors, audits and evaluates the development and deployment of AI systems, and provides redress and remedy for any harms or violations caused by AI.
- Developing a participatory and inclusive process for AI governance, which engages and empowers the public and civil society in the design and implementation of AI policies and practices, and ensures the representation and protection of marginalised and vulnerable groups.
- Fostering a culture of transparency and responsibility for AI, which requires the disclosure and explanation of the data, algorithms and decisions behind AI systems, and the assessment and mitigation of AI’s risks and impacts.
- Promoting a vision and direction for AI, which aligns the technology with the values and interests of society, and with global goals and challenges such as the Sustainable Development Goals and the Paris Agreement.
The global summit on AI safety is a chance to make a difference.
The global summit on AI safety, which the UK will host in the autumn, is a chance for the UK government to demonstrate its commitment and leadership on AI governance, and to initiate a constructive, collaborative dialogue among the international community.
The summit should not be a mere talking shop or a PR stunt, but a genuine and meaningful platform for action and change. It should aim to produce concrete outcomes and deliverables, such as:
- A global declaration or agreement on AI safety, which sets out the common values, principles and standards for AI governance, and the shared challenges and opportunities of AI development and deployment.
- A global action plan or roadmap on AI safety, which outlines the specific actions and measures that need to be taken by different actors and stakeholders, at different levels and domains, to ensure the safety, ethics and responsibility of AI.
- A global network or alliance on AI safety, which establishes a mechanism for coordination and cooperation among the relevant actors and stakeholders, and for monitoring and reporting on the progress and performance of AI governance.
The UK has a unique and historic opportunity to make a difference in the world of AI, but it also has a moral and legal obligation to do so. The UK should not waste this opportunity, nor shirk this obligation, but seize it with courage and conviction.