The failure to inform asylum applicants of the use of AI in decision-making is likely UNLAWFUL.

#1 openrightsgroup@social.openrightsgroup.org

    The failure to inform asylum applicants of the use of AI in decision-making is likely UNLAWFUL.

    A new legal opinion for ORG finds that the UK Home Office's use of AI tools meets neither its legal obligations nor the standards in the government's AI Playbook.

    We need full transparency to ensure lawful and fair decisions.

    Read more ⬇️

    https://www.independent.co.uk/news/uk/home-news/ai-artificial-intelligence-asylum-claims-backlog-b2937111.html

    #asylum #ai #legal #migrants #homeoffice #ukpolitics #ukpol


#2 openrightsgroup@social.openrightsgroup.org

      Asylum applicants aren't being told that AI is used in decision-making.

      The legal opinion finds that, as a matter of procedural fairness, this is likely to be unlawful.

      It could also breach data protection law, as applicants have no opportunity to correct inaccurate summaries of their personal data.

      #asylum #ai #legal #migrants #homeoffice #ukpolitics #ukpol


#3 openrightsgroup@social.openrightsgroup.org

        AI tools generate new text from interviews and from material such as country of origin information.

        In the UK Home Office’s evaluation, 9% of AI summaries were so flawed they had to be removed.

        There's a significant risk that asylum decisions will be based upon and impaired by material errors of fact.

        #asylum #ai #legal #migrants #homeoffice #ukpolitics #ukpol


#4 openrightsgroup@social.openrightsgroup.org

          “Technology can assist decision-making, but it cannot undermine the careful human judgment required in asylum cases.

          Where AI tools are used without adequate safeguards, there is a real risk that unlawful or unfair decisions may result.

          If AI tools are influencing asylum decisions, there must be full transparency about how those systems operate and how their outputs are used.”

          🗣️ Robin Allen KC and Dee Masters, Cloisters Chambers.

          #asylum #ai #legal #migrants #homeoffice #ukpolitics #ukpol


#5 dtwx@mastodon.social

            @openrightsgroup If an automated AI is deciding what was said in a meeting, and the outputs of that are used in decision-making, then doesn't that arguably count as "automated decision-making" under the GDPR?


#6 craigduncan@mastodon.au

              @openrightsgroup

              Just don't use them. Apply the same cost-benefit approach as probative value versus prejudice for evidence: the risk of prejudice is too high to be justified by a general belief that AI makes life easier for the state.
