Open Letter from the United Nations High Commissioner for Human Rights, published on 8 November 2023.
Madams, Sirs,
I wish to refer to the discussions on the proposed adoption of a European Union Artificial Intelligence Act (AI Act). I commend the European Union’s ambition with this proposal, which is an opportunity to further strengthen the protection of people’s rights. As one of the first major regulatory attempts regarding AI, the AI Act will not only have an impact within the European Union and its Member States but will also influence other regulatory frameworks on AI around the world.
The United Nations and the European Union share the commitment to respecting, protecting and promoting human rights. International human rights law needs to be the guiding compass at a time when AI applications are becoming increasingly capable and are being deployed across all sectors, affecting the lives of everyone. By firmly grounding new rules for the use of AI in human rights, the European Union would strengthen human rights protection in the face of the ever-increasing use of AI applications in our everyday lives. Mindful of the importance of this opportunity, and in response to proposals that have emerged during the legislative debates, I am pleased to share, in an annex to this letter, a human rights-based analysis of the AI Act and recommendations from my Office.
My Office is ready to support the institutions of the European Union and its Member States in the process of finalizing the AI Act and to ensure that it will deliver on its promise to protect and promote the human rights of all.
Please accept, Madams, Sirs, the assurance of my highest consideration.
Volker Türk
European Commission
European Parliament
Council of the European Union
ANNEX
The Office of the United Nations High Commissioner for Human Rights (“OHCHR”) wishes to share the following analysis and recommendations with respect to the currently negotiated European Union Artificial Intelligence Act (AI Act) and its possible human rights implications. They should not be understood as an exhaustive list of human rights issues, but rather as addressing those that OHCHR considers most relevant at the current stage of the negotiations on the AI Act.
High-risk classifications
The AI Act distinguishes several risk classes, which have a bearing on the obligations of AI developers, suppliers and deployers. It is indeed necessary to provide stronger safeguards for AI applications that have greater potential to result in human rights harms. In this context, OHCHR would like to underscore that the determination of risks should relate to the actual or foreseeable adverse impacts of an AI application on human rights and not be exclusively technical or safety-oriented: AI systems that carry significant risks for the enjoyment of human rights should be considered high-risk, with all associated obligations for their providers and users.
OHCHR would like to express its concern at a proposal according to which companies would be allowed to self-determine that their AI system does not fall into the high-risk category, and hence to opt out of the more stringent requirements for high-risk classes. Such a model of risk self-assessment would introduce considerable legal uncertainty, undercut enforcement and accountability, and ultimately risk undermining the core benefits of the AI Act.
Moreover, in accordance with international human rights law, including the UN Guiding Principles on Business and Human Rights, States must ensure that their legislative frameworks regulating AI products and services provide for adequate oversight mechanisms that can address actual or foreseeable human rights impacts. The involvement of technology companies in the delivery of public goods is already a major area of human rights concern for many stakeholders, including many technology companies themselves.
Stringent limits on the use of biometric surveillance and individualized crime prediction
The development and deployment of ever more powerful AI systems for surveillance purposes is a growing human rights concern worldwide. OHCHR strongly supports efforts to strictly limit, or prohibit where appropriate, in accordance with applicable international law, highly intrusive surveillance practices that would otherwise threaten the very core of human dignity, privacy, and democracy.
Remote biometric surveillance systems, in particular, raise serious concerns with regard to their proportionality, given their highly intrusive nature and broad impact on large numbers of people. For example, law enforcement’s use of facial recognition tools to scan crowds or protests is indiscriminate, bringing about unacceptable risks to human rights. Against this background, OHCHR welcomes the European Parliament’s strong position on this matter. Moreover, OHCHR supports a ban on the use of biometric recognition tools and other systems that process people’s biometric data to categorize them based on the color of their skin, gender, or other protected characteristics. Further, OHCHR supports bans on AI systems that seek to infer people’s emotions, individualized crime prediction tools, and untargeted scraping tools used to build or expand facial recognition databases. Such tools entail dangerous accuracy issues, often due to a lack of scientific grounding, and are deeply intrusive. They threaten to systematically undermine human rights, in particular due process and judicial guarantees.
Fundamental rights impact assessments
OHCHR would also like to express its strong support for the European Parliament’s proposal for comprehensive fundamental rights impact assessments (FRIA). A robust FRIA requirement for both public and private actors deploying AI is a key part of grounding AI regulation in human rights. Given the serious risks that AI systems can pose to the enjoyment of human rights and the need for meaningful action to mitigate such impacts, proposals aimed at weakening or removing the rights risk assessment requirements are very concerning. A meaningful FRIA should cover the entire AI life cycle and be based on clear parameters for assessing the impact of AI on fundamental rights; it should also ensure transparency about the results of impact assessments, participation of affected people, and involvement of independent public authorities in the assessment process.
Assessing adverse impacts on human rights is a core component of human rights due diligence, both for private and public sector use of AI, and follows from international human rights law and the UN Guiding Principles on Business and Human Rights.1 OHCHR’s work with leading AI companies in the context of the UN Human Rights B-Tech project demonstrates that assessing the adverse impacts of AI on people is the “art of the feasible”. The fact that some companies at the forefront of AI development are endorsing and implementing a rights-based approach to risk management points to fundamental rights as a promising foundation for rights-respecting AI practices in the context of the EU AI Act as well.
Technical standards
OHCHR would like to draw your attention to the complex role of standard-setting organizations as envisaged in the drafts of the AI Act. According to the drafts, standards should be a means for providers to demonstrate conformity with the requirements of the AI Act. In other words, standard-setting organizations will have a pivotal function with far-reaching effects on the enjoyment of human rights. This makes it particularly important to ensure the utmost transparency, including free public access to all relevant documentation and adopted standards; meaningful access for all stakeholders to standard-setting processes under the AI Act; and effective accountability mechanisms, such as judicial oversight and access to an effective remedy. OHCHR’s recent reporting has shown that further efforts are urgently needed to better integrate human rights expertise into technical standard-setting processes.2 Given the general lack of resources available to civil society stakeholders to engage in standard-setting processes in a sustainable manner, it is also recommended that the EU set up mechanisms providing material and other support to under-resourced stakeholders.
Holistic approach to AI harms in all areas
Human rights protections apply to everyone within the territory and jurisdiction of the States concerned. In view of this, OHCHR notes with concern the Council’s proposal for a blanket exemption from the AI Act for AI systems developed or used for national security purposes, as well as exceptions from the AI Act for law enforcement and border control. This would exempt fields of application where AI is widely used, where the need for safeguards is particularly urgent, and where there is evidence that the existing use of AI systems disproportionately targets individuals in already marginalized communities. The contexts of law enforcement, national security and migration control often involve intrusive measures and tools that require enhanced safeguards and due process guarantees, not weaker ones.3 Exempting and excepting these areas would create a substantial and extremely concerning gap in human rights protection under the AI Act.
OHCHR stands ready to work with all institutions of the European Union to ensure that fundamental human rights considerations inform the development and implementation of the EU AI Act and the European Union’s broader digital framework.