The failure to inform asylum applicants of the use of AI in decision-making is likely UNLAWFUL.
A new legal opinion for ORG finds that the UK Home Office's use of AI tools does not meet its legal obligations or the standards set out in the AI Playbook.
We need full transparency to ensure lawful and fair decisions.
Read more
Home Office use of AI in asylum cases could be unlawful, legal experts warn
Exclusive: Caseworkers use AI to summarise asylum interviews and to search for information about country of origin
The Independent (www.independent.co.uk)
-
AI tools generate new text from interviews and from material such as country of origin information.
In the UK Home Office’s evaluation, 9% of AI summaries were so flawed they had to be removed.
There's a significant risk that asylum decisions will be based upon and impaired by material errors of fact.
Asylum applicants aren't being told that AI is used in decision-making.
The legal opinion finds that, as a matter of procedural fairness, this is likely to be unlawful.
It could breach data protection, as applicants don't have the opportunity to correct inaccurate summaries of personal data.
-
“Technology can assist decision-making, but it cannot undermine the careful human judgment required in asylum cases.
Where AI tools are used without adequate safeguards, there is a real risk that unlawful or unfair decisions may result.
If AI tools are influencing asylum decisions, there must be full transparency about how those systems operate and how their outputs are used.”
Robin Allen KC and Dee Masters, Cloisters Chambers.
-
@openrightsgroup if an automated AI is deciding what was said in a meeting, and the outputs of that are used in decision-making, then doesn't that arguably count as "automated decision making" under the GDPR?
-
Just don't use them. Apply the same cost-benefit approach as probative value v prejudice for evidence. The risk of prejudice is too high to justify some general belief that AI makes life easier for the state.