Question Period Note: ARTIFICIAL INTELLIGENCE AND AUTOMATION FOR IMMIGRATION DECISIONS
About
- Reference number: IRCC - 2023-QP-00062
- Date received: Oct 17, 2023
- Organization: Immigration, Refugees and Citizenship Canada
- Name of Minister: Miller, Marc (Hon.)
- Title of Minister: Minister of Immigration, Refugees and Citizenship
Issue/Question:
Concerns about IRCC’s use of automation and artificial intelligence in decision-making on client applications
Suggested Response:
• IRCC uses advanced analytics and other automated systems to help officers identify routine applications for streamlined processing, as well as to perform other functions, such as the sorting of applications based on common characteristics.
• These systems do not use opaque artificial intelligence (AI), do not automatically learn or adjust on their own, and are not used to refuse any applications or deny entry to Canada.
• IRCC officers remain central to immigration processing and continue to exercise their delegated authority in decision-making.
• The use of advanced analytics enables IRCC to automate some processing steps for routine applications. By leveraging technology, IRCC is able to direct officer resources toward more complex or sensitive applications, and increase the efficiency of our processing.
If pressed:
The Integrity Trends Analysis Tool (ITAT) uses data analytics to uncover risk patterns, which the Department then incorporates into the risk indicators used to screen incoming applications. ITAT’s outputs are not shared directly with officers making decisions on individual applications, and ITAT does not automatically put “red flags” on certain clients’ files.
A two-step, “human-in-the-loop” process helps to guard against the risk that officers might refuse an application simply because it was flagged by an automated system.
ITAT only reviews data in client applications. It does not automate or replace human decision-making, and it is not used for investigations.
Background:
IRCC use of automated systems
• IRCC is using a number of innovative approaches to manage high application volumes, improve service delivery and enhance the client experience.
• In support of our clients, we are using digital tools to create processing efficiencies where appropriate. IRCC’s approach to automated systems is, by and large, facilitative in nature. Models are put in place to streamline processing and approve straightforward cases, never to automatically refuse applicants. All models go through a rigorous review process before implementation to ensure they are equitable, explainable, privacy-protecting and technically sound.
• IRCC’s use of automated systems can be divided into the following broad categories:
o Automating positive eligibility determinations
o Distributing applications between officers based on available capacity or the characteristics of the application (e.g. one that requires a decision-maker with local knowledge)
o Identifying applications that may require additional verification
o Creating ‘annotations’ that summarize basic information on each client to reduce officer searches in the Global Case Management System
o Triaging client emails to enable faster replies, and responding to client enquiries by providing publicly available information
o Assessing biometrics
• At this time, none of IRCC’s automated systems can refuse an application, nor can they recommend a refusal to an officer. All final decisions to refuse applications are made by officers after thorough review. Officers receive training on IRCC’s automated decision-support tools to ensure they understand that the absence of an automated approval does not constitute a recommendation to refuse an application.
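To make the constraint described above concrete, the following is a minimal, hypothetical Python sketch; the names TriageOutcome and triage_application, and the confidence-score threshold, are invented for illustration and do not describe IRCC’s actual systems. The point it demonstrates is that the output space of such a decision-support tool contains no refusal: an application either receives an automated positive eligibility determination or goes to an officer for review.

```python
from enum import Enum, auto

class TriageOutcome(Enum):
    """Possible outputs of a hypothetical decision-support model (illustrative only)."""
    AUTO_APPROVE_ELIGIBILITY = auto()  # routine case: the eligibility step is approved automatically
    OFFICER_REVIEW = auto()            # everything else: full individualized review by an officer
    # There is deliberately no REFUSE member: a refusal can only result from
    # an officer's own assessment, never from the model.

def triage_application(model_confidence: float, threshold: float = 0.95) -> TriageOutcome:
    """Route one application based on a hypothetical model confidence score."""
    if model_confidence >= threshold:
        return TriageOutcome.AUTO_APPROVE_ELIGIBILITY
    # A score below the threshold is not a negative signal about the client;
    # it simply means an officer assesses the application in the usual way.
    return TriageOutcome.OFFICER_REVIEW
```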
• IRCC does not use generative AI tools, such as ChatGPT, in support of decision-making on client applications. The Department is exploring the potential benefits of using generative AI in certain other capacities, such as synthesizing information in support of research and policy development. IRCC is approaching generative AI with an emphasis on caution, in line with guidance for public servants provided by the Treasury Board Secretariat.
Key projects
• In 2018, IRCC began using data analytics to help officers triage online Temporary Resident Visa (TRV) applications from China and India. In January 2022, the China and India models were updated, and a third model was introduced for TRV applications from all other countries. All three of these advanced analytics models function in the same way: they sort incoming files to streamline officer review, and they automatically approve the eligibility portion of certain straightforward applications.
• In spring 2021, IRCC launched another advanced analytics project for in-Canada Family Class spousal and common-law applications. Similar to the TRV models, this project aimed to speed up processing by triaging applications and automating some positive eligibility determinations. All applications that do not receive an automated eligibility approval by the model are sent for an individualized assessment by officers in accordance with standard practice. IRCC is moving ahead with plans to expand the use of advanced analytics to cover the rest of the Family Class spousal and common-law caseload.
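As a rough illustration of the sorting step these models perform, the hypothetical Python sketch below bins scored applications into review queues; the tier labels, thresholds and scores are invented and do not reflect IRCC’s actual model design.

```python
# Hypothetical tier boundaries, ordered from highest to lowest threshold.
TIER_THRESHOLDS = [
    (0.95, "routine"),    # eligible for an automated positive eligibility determination
    (0.70, "standard"),   # ordinary officer review
    (0.00, "complex"),    # review by an officer with more time or specialized knowledge
]

def sort_into_queues(scored_applications: list[tuple[str, float]]) -> dict[str, list[str]]:
    """Group application IDs by review tier so similar files can be worked together."""
    queues: dict[str, list[str]] = {label: [] for _, label in TIER_THRESHOLDS}
    for app_id, score in scored_applications:
        for threshold, label in TIER_THRESHOLDS:
            if score >= threshold:
                queues[label].append(app_id)
                break
    return queues

print(sort_into_queues([("A-001", 0.98), ("A-002", 0.81), ("A-003", 0.40)]))
# {'routine': ['A-001'], 'standard': ['A-002'], 'complex': ['A-003']}
```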
• The Integrity Trends Analysis Tool (ITAT), recently covered in the media, was developed as part of broader departmental efforts to become a more data-driven organization – one that makes evidence-based decisions with efficiency, consistency and a commitment to program integrity.
• ITAT analyzes factual data that IRCC collects in its Global Case Management System and extracts risk and fraud patterns. This allows risk assessment officers to focus their efforts more effectively on identifying, validating and taking action on fraud and risk patterns. Much like financial institutions use analytics to guard against credit card fraud, the Government of Canada has adopted industry-standard technologies to protect the safety and security of Canadians, including making sure that immigration programs are not exploited by those seeking to enter Canada illegally.
• ITAT does not make or recommend decisions, nor does it present information directly to officers making decisions on applications. Rather, it detects fraud patterns that feed into a larger set of risk indicators, which have always formed part of IRCC’s decision-making process.
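A highly simplified Python sketch of this kind of pattern detection is shown below; the field names, sample records and threshold are invented, and the real tool’s methods are not public. The output is a list of frequently co-occurring attribute combinations for risk analysts to investigate, not a flag attached to any individual client’s file.

```python
from collections import Counter
from itertools import combinations

def find_frequent_patterns(applications: list[dict], min_count: int = 3) -> list[tuple]:
    """Count co-occurring (field, value) pairs across application records and
    return the combinations that recur often enough to merit analyst review."""
    pattern_counts: Counter = Counter()
    for record in applications:
        for pair in combinations(sorted(record.items()), 2):
            pattern_counts[pair] += 1
    return [(pattern, count)
            for pattern, count in pattern_counts.most_common()
            if count >= min_count]

# Illustrative records only; these field names are not GCMS fields.
sample = [
    {"representative": "Firm X", "stated_employer": "Company Y"},
    {"representative": "Firm X", "stated_employer": "Company Y"},
    {"representative": "Firm X", "stated_employer": "Company Y"},
    {"representative": "Firm Z", "stated_employer": "Company W"},
]
print(find_frequent_patterns(sample))
```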
• Media coverage has incorrectly suggested that IRCC’s ‘Chinook’ tool employs advanced analytics or artificial intelligence to automate decisions. Chinook is a processing aid that extracts basic information from client applications and displays it in a clear format that is more user-friendly for officers. Chinook is not powered by artificial intelligence or advanced analytics, and does not make or recommend decisions on applications.
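To show how simple that kind of extraction can be, here is a hypothetical Python sketch of a summary row built from a handful of application fields; the record and field names are invented and are not Chinook’s actual columns.

```python
def summarize_application(record: dict, fields: list[str]) -> str:
    """Lay out a fixed set of basic fields from one application record as a single
    readable row, so an officer does not have to click through multiple screens."""
    return " | ".join(f"{field}: {record.get(field, '-')}" for field in fields)

# Hypothetical record and field names, for illustration only.
application = {
    "application_number": "V123456789",
    "purpose_of_visit": "Tourism",
    "previous_travel_to_canada": "Yes",
}
print(summarize_application(
    application,
    ["application_number", "purpose_of_visit", "previous_travel_to_canada"],
))
```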
Ensuring responsible use of data-driven technologies
• The Department has developed detailed guidance, including a Policy Playbook on Automated Support for Decision-making, to guide how these technologies can be used responsibly, effectively and efficiently. IRCC has also established an internal governance framework to ensure that new decision support tools go through a rigorous review and approval process. A broad-based committee of senior executives acts as the key oversight body in this governance framework.
• IRCC strives to be a leader in the responsible use of data-driven technologies, and has developed its approach to align with the Treasury Board Directive on Automated Decision-Making, as well as other key legal and privacy requirements. To date, IRCC has published more Algorithmic Impact Assessments than any other federal department or agency.
• Protecting individuals’ personal information continues to be a priority for IRCC as the Department tests new and innovative approaches. The use of personal information for analytics-based processing is in accordance with the Immigration and Refugee Protection Act and the Privacy Act. Its use is consistent with the purpose for which it was initially collected.
Additional Information:
None