This page is maintained by Convergence Analysis, a non-profit think tank building a flourishing future for humanity in a world with transformative AI.
Resources for further learning, engagement, advocacy, and positive change
Essential Background
Essential understanding of AI risks and opportunities
TED Talks
Books
Stay Informed
Ongoing news and developments
Podcasts
YouTube
Newsletters
Build Skills
Training and education opportunities
Get Involved
Volunteering and community participation
Taking Action in Canada
Canada’s promising Bill C-27 - the Digital Charter Implementation Act, 2022, which included the Artificial Intelligence and Data Act (AIDA) - is dead, and a renewed effort by policymakers is needed to ensure Canada contributes to global AI safety. That will only happen if voters are active.
1. Email your MP: Create and support strong AI safety regulations
Contact your MP (find via www.ourcommons.ca/Members)
Call for the creation of an act focused on improving AI safety (or for the reintroduction of C-27 or a successor to it). Demand that, like the EU’s AI Act, it include explicit statutory definitions and protections against systemic and catastrophic AI risk.
Use simple language like, “Canada should formally recognize global AI risk and commit to duties like red-teaming, disclosure, and international coordination.”
2. Write to senior Ministers and officials, such as those at Innovation, Science and Economic Development Canada (ISED)
Contact the office of Philip Jennings, Deputy Minister of Innovation, Science and Economic Development Canada, at philip.jennings@ised-isde.gc.ca.
Contact the office of Francis Bilodeau, Associate Deputy Minister of Innovation, Science and Economic Development Canada, at francis.bilodeau@ised-isde.gc.ca.
Include messages such as those listed in step 1: signal to the department responsible for Canada’s AI policy that existential risk is a priority.
3. Attend or submit input to CAISI consultation rounds
Monitor Canada’s AI Safety Institute (CAISI) for open calls or feedback windows.
These consultation rounds offer avenues to push for systemic-risk considerations in funding, safety testing, and research priorities. Consider subscribing to ISED/CAISI alerts and sending a one-page submission when opportunities open.
4. Call on G7 AI Summit delegates to take decisive follow-up action
On Twitter or LinkedIn, tag Canadian G7 & GPAI delegates or MPs involved in the AI or Innovation portfolios, using hashtag #AISafety. Public visibility nudges policymakers to prioritize x-risk measures domestically and abroad.
Tweet something like:
“Thanks @CanadaGov! At the G7 you signed on to joint AI statements. Please ensure they are implemented in Canadian law in a way that protects Canadians against systemic and existential risks.”
5. Volunteer for Effective Altruism Canada or local AI safety groups
Join EA Canada chapters in Toronto, Montréal, Vancouver, and beyond. These groups often host public outreach, letter-writing campaigns, and MP briefings.
Search “EA Canada AI safety” and sign up to volunteer or attend events such as 2024’s EAGx Toronto.