No. 134: A Human Rights-Based Approach to Transatlantic AI Governance: The Case of Biometrics Development – Working Paper

Artificial intelligence (AI) systems are increasingly deployed across critical sectors such as healthcare, finance, and public administration, driven by their potential to enhance security, enable contactless identification, and accelerate digital transformation. However, their adoption has also introduced significant risks, including mass surveillance, misidentification, discriminatory outcomes, and the indirect suppression of democratic freedoms such as expression and assembly. These risks are often overlooked during the design and development stages of such systems.
A notable example is the development of large-scale biometric technologies by companies such as Clearview AI, a United States (US)-based facial recognition company. Clearview assembled the largest facial recognition database to date in secrecy and with little to no external scrutiny. Its subsequent operations in the European Union (EU) have triggered significant legal challenges for failing to comply with EU data protection rules, including those governing the design and development of such technologies. This case illustrates the growing tension between EU and US regulatory approaches to AI governance.
This working paper examines the transatlantic regulatory divergence in the governance of AI system design and development. The example of biometric technologies built without appropriate safeguards from the outset demonstrates the risks of inadequate regulation during the early stages of AI development. While the EU has adopted a precautionary, rights-based approach to AI development, reflected in instruments such as the General Data Protection Regulation (GDPR) and the Artificial Intelligence Act (AI Act), the US relies on a market-driven framework composed of non-binding guidelines, executive orders, and fragmented state-level legislation. These structural differences complicate efforts to build a shared regulatory model, despite common democratic values and strong economic ties.
Despite these regulatory divergences, both jurisdictions remain committed to key principles such as transparency, accountability, non-discrimination, and the rule of law. These shared commitments offer a foundation for regulatory cooperation. This paper therefore argues that a prospective transatlantic agreement on AI governance should be grounded in a human rights-based approach (HRBA) applied across the entire AI lifecycle, from concept to deployment. Such an approach offers a robust foundation for responsible AI development. Ultimately, the paper contends that only through coordinated governance grounded in human rights can the EU and the US effectively mitigate the risks of AI and offer a democratic counterbalance to authoritarian models of technological development.
