Proposed new EU rules on artificial intelligence recognise use for migration control as “high risk”
On 21 April, the European Commission proposed new rules on the use of artificial intelligence (AI). The proposal would ban certain uses of AI, including the use by law enforcement of real-time remote biometric identification in public spaces, with exceptions for the prosecution of serious crimes. It also identifies "high-risk" uses of AI, including predictive policing and certain uses in migration, asylum and border control. Under the proposed rules, companies or public authorities that develop or use AI for such "high-risk" applications would face specific obligations, such as registration of the technology in an EU database and, for some but not all "high-risk" uses, checks by designated national oversight bodies. The proposal has been criticised for its failure to ban all unacceptable uses of AI, its strong focus on developers of technology rather than on users and affected people, its reliance on largely self-assessed conformity with the rules, and its inadequate safeguards for "high-risk" uses. The risks of the growing use of digital technology and large-scale data processing for migration control have been repeatedly highlighted by human rights experts, including the UN Special Rapporteur on racism.