Reference number:
Date received:
May 21, 2023
Privy Council Office
Name of Minister:
Trudeau, Justin (Right Hon.)
Title of Minister:
Prime Minister

Suggested Response:

• As artificial intelligence systems become more sophisticated, it is crucial that governments take steps to provide guardrails around the development and use of this technology, and to make sure that Canadians benefit from it.
• The Government of Canada takes the risks from AI systems very seriously, including both risks to individuals, such as discriminatory impacts on historically marginalized groups, and broader potential impacts on society.
• This is why the government has tabled the Artificial Intelligence and Data Act (AIDA), part of Bill C-27.
• It is critical that we pass the AIDA as quickly as possible, to address the mounting risks that AI systems pose and to ensure that Canadians can trust the AI systems used across the economy.
• We are fully engaged in the Hiroshima AI Process, coming out of the recent G7 Meeting in Japan, and committed to working with like-minded countries on efforts to fuel trust and safety as AI develops.


• Since the release of OpenAI’s ChatGPT in November 2022, concerns have mounted regarding the potential impact of advanced AI systems.

o ChatGPT is capable of producing content, including answers to questions, that appears plausible and can be difficult to distinguish from human-generated content.

o Other systems that generate images or video have raised the risk of fuelling misinformation and other harms to society.

• An open letter was issued on May 30, 2023, signed by a number of executives at large technology companies and prominent Canadian researchers, warning of societal-scale risks from AI systems, including the potential extinction of humanity.

o This follows an earlier open letter issued by the Future of Life Institute in March 2023, calling for a “pause on giant AI experiments”; some individuals signed both letters.

• The AIDA was tabled as part of Bill C-27 in June 2022. It takes a risk-based approach to AI regulation, proposing to mitigate risk through a set of requirements that would apply to “high-impact” AI systems.

o The AIDA recently passed second reading in the House of Commons, and is now headed to committee for further debate.

o The AIDA was originally focused on risks to health and safety and on discriminatory impacts of AI systems, although an amendment strategy is being proposed, in response to stakeholder comments, that would allow broader societal risks to be addressed.

• The AIDA’s approach is consistent with international frameworks on AI regulation, including the EU’s AI Act.

Additional Information: