Question Period Note: CANADIAN ARTIFICIAL INTELLIGENCE SAFETY INSTITUTE
About
- Reference number: AIDI-2026-QP-00001
- Date received: Sep 15, 2025
- Organization: Innovation, Science and Economic Development Canada
- Name of Minister: Solomon, Evan (Hon.)
- Title of Minister: Minister of Artificial Intelligence and Digital Innovation
Issue/Question:
Why did the Government of Canada create the Canadian Artificial Intelligence Safety Institute?
Suggested Response:
• Rapid advances in artificial intelligence (AI) have underscored the promise of this transformative technology, but its power also raises questions about risk and safety. Ensuring safety is critical to building public trust in AI systems.
• In November 2024, the Government of Canada created the Canadian Artificial Intelligence Safety Institute to advance scientific understanding of the risks associated with the most advanced AI systems and provide tools to address those risks.
• To do this, the Institute leverages Canada’s world-class AI research ecosystem through the Canadian Institute for Advanced Research (CIFAR) and the National Research Council (NRC), and collaborates with partner institutes around the world.
Background:
• Ensuring the safety of the most powerful artificial intelligence (AI) systems is a priority for Canada and its closest partners.
• Given that AI capabilities are not fully understood and constantly evolving, there is a clear need to better understand risks and develop measures to mitigate them.
• The Canadian Artificial Intelligence Safety Institute is a founding member of the International Network of AI Safety Institutes, which includes counterpart organizations in the US, UK, EU, France, Japan, Korea, Singapore, Australia, and Kenya.
• The Institute leverages Canada’s AI research community to advance the science of AI safety. CAISI’s key research priorities for 2025-26 include risk assessment of AI systems, studying how AI systems work and interact with the real world, and developing new techniques to make AI systems safer. It has initiated work under all three priority areas this year.
• The Institute also seeks to advance international collaboration by working with members of the Network on joint model evaluation exercises and by building a robust program of research that advances knowledge in the field of AI safety, including a $1 million collaborative partnership signed with the UK AI Security Institute in July 2025 to fund research on AI alignment.
• The Institute is also establishing partnerships with major stakeholders in the AI industry to collaborate on AI safety research and testing, including the recent announcement of a Memorandum of Understanding (MOU) with Canadian AI company Cohere.
• Through these actions, the Institute aims to provide guidance and tools to support policy priorities across government, as well as to communicate AI safety information and build awareness and trust among Canadians and stakeholders.
Additional Information:
With the rapid advancement of artificial intelligence (AI), ensuring the safe and secure deployment of powerful AI systems has emerged as a global priority.
At the United Kingdom AI Safety Summit in November 2023, the Bletchley Declaration underscored the increasing risks posed by frontier AI technologies, including the potential for misuse in disinformation campaigns, cybersecurity breaches, and even bioweapon development. These concerns were echoed at the Seoul AI Safety Summit in May 2024, where global leaders discussed the need for international cooperation on AI safety. While the AI Action Summit in Paris in February 2025 placed greater emphasis on supporting adoption and innovation, Canada and its closest allies recognize the continued importance of building trust in AI and are committed to supporting the responsible development and deployment of AI systems.
Launched in 2024 with an initial investment of $50 million over 5 years, CAISI is part of the government's broader strategy to promote responsible AI development in Canada, addressing societal, technical, and ethical challenges. Since its establishment, CAISI has initiated collaboration with stakeholders from the public and private sectors, academia, and civil society, ensuring a multidisciplinary approach to AI safety research. CAISI focuses on key research areas such as AI model risk assessment and evaluation frameworks, technical mitigation strategies for AI risks, and designing safer AI systems.
CAISI’s operational model is designed to leverage existing resources and partnerships. The Institute is housed at Innovation, Science and Economic Development Canada, with a dedicated office responsible for leading policy coordination and international engagement. Through this office, CAISI engages Canada’s robust, world-leading AI research community, including the three national AI institutes (Amii, Mila and Vector), to conduct cutting-edge research on AI safety issues. Research activities are conducted through two streams.
The first stream, investigator-led research, is managed through a Contribution Agreement with the Canadian Institute for Advanced Research (CIFAR), enabling Canadian and international experts to explore critical AI safety questions. To date, CAISI’s collaboration with CIFAR under this stream has advanced two new research funding initiatives:
• The Catalyst Grant Program, for high-risk, high-reward initiatives. Ten new projects were announced in June 2025, focusing on combatting misinformation, developing trustworthy AI models aligned with human values, and ensuring real-world safety in AI systems.
• The Solution Network Program, which supports longer-term collaborative research. Two new Solution Network projects launched in November 2025: one addressing the AI safety challenges of synthetic content infiltrating the Canadian justice system, and the other helping build safer and more equitable AI models for linguistic minorities by addressing dialect bias in AI.
The second stream, government-directed projects, has been implemented through a Memorandum of Understanding (MOU) with the National Research Council (NRC) and includes collaborative projects with domestic and international partners. To date, CAISI has initiated 13 projects focused on AI safety issues such as deepfake detection, safe robotic automation, the risks associated with autonomous AI agents, and the development of benchmarks and evaluations for AI models.
CAISI is a pioneering member of the International Network of AI Safety Institutes, comprising equivalent offices in ten jurisdictions, including the United States (US), the United Kingdom (UK), the European Union (EU), France, Australia, Singapore, Japan, South Korea, and Kenya. The Network will coordinate efforts to advance a collective understanding of AI safety, address the risks of cutting-edge AI systems, and contribute to the development of internationally recognized AI safety standards that could form the backbone of AI safety policies across jurisdictions and markets. The Network was launched in November 2024 in San Francisco. CAISI hosted the third in-person meeting of the Network in Vancouver, on the sidelines of the International Conference on Machine Learning (ICML), where decisions were made on the Network’s priorities, governance, and collaborations.
CAISI co-leads the Network’s research track on the risks posed by AI-generated synthetic content with Australia, and published the Network’s Synthetic Content Research Agenda in July 2025. CAISI is also participating in multilingual model evaluations led by Singapore and the UK, with contributions in Cantonese, Farsi, and Telugu; and jointly published model testing results and a risk assessment tool inventory at the Paris AI Action Summit (Feb 2025).
CAISI will continue to advance research on AI safety, including deepening collaborations with AI developers and international partners. This ongoing work on AI safety will help to build public trust, which is essential for successful AI adoption and innovation.