Question Period Note: CANADIAN ARTIFICIAL INTELLIGENCE SAFETY INSTITUTE
About
- Reference number:
- ISI-2024-QP-00036
- Date received:
- Nov 25, 2024
- Organization:
- Innovation, Science and Economic Development Canada
- Name of Minister:
- Champagne, François-Philippe (Hon.)
- Title of Minister:
- Minister of Innovation, Science and Industry
Issue/Question:
How is the AI safety institute announced in Budget 2024 going to help mitigate the risks associated with the adoption of artificial intelligence?
Suggested Response:
• The rapid advancement of artificial intelligence (AI) brings with it a need to ensure the safety of the most powerful AI systems.
• In November 2024, the Government of Canada announced the establishment of the Canadian Artificial Intelligence Safety Institute.
• First announced as part of Budget 2024, the institute will help advance the knowledge and understanding of risks associated with the most advanced AI systems, as well as develop measures to reduce those risks.
• The institute will leverage the robust Canadian AI ecosystem and work collaboratively with AI safety institutes around the world as part of a new international network.
Background:
With the rapid advancement of artificial intelligence (AI), ensuring the safe and ethical deployment of powerful AI systems has emerged as a global priority. At the United Kingdom AI Safety Summit, the Bletchley Declaration underscored the increasing risks posed by frontier AI technologies, including the potential for misuse in disinformation campaigns, cybersecurity breaches, and even bioweapon development. The declaration highlighted the need for better alignment of AI systems with human objectives, especially as capabilities continue to evolve unpredictably. These concerns were further echoed at the Seoul Summit, where global leaders discussed the need for international cooperation on AI safety. Canada, alongside its closest allies, is committed to mitigating these risks and ensuring that AI systems are developed and used responsibly.
The Government of Canada launched the Canadian AI Safety Institute (CAISI) with an initial investment of $50 million over five years announced in Budget 2024. This investment marks a significant step toward safeguarding AI development in Canada. CAISI will bolster Canada’s capacity to address AI safety risks, further positioning the country as a leader in the responsible and ethical development of AI technologies. CAISI will also collaborate with safety institutes in other jurisdictions as part of a new International Network of AI Safety Institutes, which met for the first time on November 20-21, 2024 in San Francisco.
CAISI is part of the government's broader strategy to promote responsible AI development in Canada, addressing societal, technical, and ethical challenges. The institute will collaborate with stakeholders from the public and private sectors, academia, and civil society, ensuring a multidisciplinary approach to AI safety research. CAISI will focus on key research areas such as AI model risk assessment, evaluation frameworks, and ethical governance of synthetic content, working with global partners to address AI challenges responsibly.
CAISI’s operational model is designed to leverage existing resources and partnerships. The institute will be housed at Innovation, Science and Economic Development Canada, with a dedicated office responsible for leading policy coordination and international engagement. Research activities will be conducted through two streams. The first, investigator-led research, will be managed through a Contribution Agreement with the Canadian Institute for Advanced Research (CIFAR), enabling Canadian and international experts to explore critical AI safety questions. The second stream, government-directed projects, will be implemented through a Memorandum of Understanding (MOU) with the National Research Council (NRC), focusing on projects that address direct government priorities, including international collaboration.
Canada’s leadership in AI safety was further solidified at the inaugural convening of the International Network of AI Safety Institutes in San Francisco in November 2024, where CAISI played a key role in shaping global collaboration on AI risk mitigation. This meeting brought together leading AI safety institutes from around the world to discuss critical issues, including model evaluation, risk assessment, and the management of synthetic content. CAISI co-led the synthetic content risk management track. Canada’s active participation and leadership in these discussions will strengthen its role within the growing global network.
Additional Information:
If pressed on international collaboration
• Canada and its closest partners have made ensuring the safety of the most powerful AI systems a priority for the coming years.
• The inaugural United Kingdom (UK) AI Safety Summit last year, and resulting Bletchley Declaration, highlighted growing global concern over risks arising from the development and use of AI systems.
• Such systems could be intentionally used by bad actors to create disinformation, evade existing cybersecurity measures, or more easily develop bioweapons.
• Risks also stem from the challenge of aligning increasingly capable systems with human-set objectives. Given that AI capabilities are not fully understood and constantly being developed, there is a clear need to better understand risks and develop mitigation measures.
• The Canadian Artificial Intelligence Safety Institute is a founding member of the growing International Network of AI Safety Institutes, alongside institutes in the U.S., UK, EU, Japan, Korea, and Australia. Through the network, Canada will participate in joint projects with other countries, including sharing its findings to multiply its impact.
• The inaugural convening of the International Network of AI Safety Institutes took place on November 20-21, 2024 to coordinate work and discuss priorities.
• CAISI intends to work closely with international partners and consult a diverse set of stakeholders as it develops its research priorities.