Strategic Compliance: Revising Canada’s Artificial Intelligence and Data Act to Meet EU Human Rights and Data Protection Benchmarks
Alex Macfarlane, Volume 83 Senior Editor
1. Introduction
Of all the potential impacts of artificial intelligence (AI), human rights concerns emerge as an especially serious issue. However, Canada’s Artificial Intelligence and Data Act (AIDA), which would represent Canada’s first framework to regulate AI,[1] does not address these concerns. Whether this omission was a genuine oversight or a deliberate move to avoid jurisdictional conflict with the provinces, which enjoy exclusive jurisdiction over civil rights,[2] it will have significant implications both within Canada’s borders and beyond.
AIDA’s human rights gap also creates risks for Canada’s economic interests with the European Union (EU). The EU’s approach to AI regulation in the EU AI Act[3] prioritizes safeguarding fundamental rights against AI’s potential harms. The EU also sets high standards for data protection, especially with regard to the transfer of personal data outside the EU. While the EU has granted Canada adequacy status under its General Data Protection Regulation (GDPR),[4] which provides for the free flow of personal data from the EU without further safeguards, AIDA’s omission of human rights considerations has given the EU cause for reconsideration, putting the flow of data between Canada and the EU at risk.
To avoid losing adequacy status, Canada should clearly and expressly address human rights concerns in AIDA. A preamble directly acknowledging AI’s impacts on human rights and a human rights impact assessment mechanism similar to that of the EU AI Act are good places to start. These changes would serve as strategic levers to maintain Canada’s adequacy status with the EU.
This article first discusses AIDA’s proposed approach to regulating AI and emphasizes its failure to meaningfully grapple with the human rights concerns presented by AI systems. It then contrasts AIDA’s omission with the EU’s focus on human and fundamental rights in the drafting of the EU AI Act, before turning to the EU’s data protection and transfer regime under the GDPR and applying those standards to AIDA’s current drafting. The article concludes by outlining possible amendments to AIDA that would address its shortcomings relative to EU standards.
2. AIDA and Human Rights
If passed,[5] AIDA would be Canada’s first extensive framework for AI regulation in the private sector.[6] AIDA’s design was said to promote AI technology while safeguarding individual rights and public interests.[7] Yet, as written, AIDA does not meaningfully recognize the potential human rights implications of AI. AIDA’s current draft makes no specific mention of human rights. At best, the Canadian Human Rights Act is referenced in AIDA’s definition of “biased output,” creating obligations around biased output and addressing the right to be free from discrimination, but not human rights generally.[8] Notably, AIDA does not address the significant privacy implications of AI.[9] Unlike the EU AI Act, for example, AIDA does not limit certain uses of AI surveillance, such as real-time biometric surveillance in public spaces.[10]
3. The EU AI Act and Human Rights
The EU AI Act is the first comprehensive attempt at regulating AI by any major regulator.[11] Unlike AIDA, the EU AI Act addresses AI’s potential impact on human rights in several ways. These differences are important because they raise the question of whether AIDA provides protection “essentially equivalent” to that guaranteed in the EU. Without essentially equivalent protection, Canada may lose its valuable adequacy status with the EU.
The EU AI Act’s preamble centers on upholding rights, stating, for instance, that it aims to ensure trustworthy and safe AI is developed and used in compliance with fundamental rights obligations.[12] The preamble further recognizes that AI may generate risks and cause harm to public interests and fundamental rights, contemplating physical, psychological, societal or economic impacts.[13] Later, Article 1(1) provides that the purpose of the EU AI Act is to ensure a high level of protection for health, safety and fundamental rights.[14] Note that while the EU AI Act uses the term “fundamental rights” and not “human rights,” the terms largely refer to the same subject matter, and will generally be used interchangeably in this analysis.[15]
Article 27 requires that certain deployers of AI systems conduct a “fundamental rights impact assessment,” further demonstrating the EU’s consideration of human rights in the AI context.[16] The obligations under Article 27 apply to deployers governed by public law, private actors providing public services, and banking and insurance service providers that utilize AI systems listed as high-risk.[17] The impact assessment consists of: (a) a description of the deployer’s processes in which the high-risk AI system will be used in line with its intended purpose; (b) a description of the period of time and frequency in which each high-risk AI system is intended to be used; (c) the categories of natural persons and groups likely to be affected by its use in the specific context; (d) the specific risks of harm likely to impact the categories of persons or group of persons identified; (e) a description of the implementation of human oversight measures, according to the instructions of use; and (f) the measures to be taken in case of the materialization of these risks, including their arrangements for internal governance and complaint mechanisms.[18]
4. EU Data Regulation and Third Country Transfers
Chapter V of the GDPR[19] regulates the transfer of personal data outside the EU and provides the standards for granting adequacy status to third countries, a status Canada currently enjoys. The guiding principle of the rules set out in Chapter V, found in Article 44, is “to ensure that the level of protection of natural persons guaranteed by [the GDPR] is not undermined.”[20] This principle is read in conjunction with the GDPR’s general goal to both “protect fundamental rights and freedoms of natural persons”[21] and enable the “free movement of personal data within the Union.”[22]
The GDPR provides a multi-tiered framework to facilitate the transfer of personal data outside the EU. The first tier, covered by Article 45, sets out the conditions under which personal data can be transferred following an adequacy decision. Article 46 provides the second tier of requirements for data transfers subject to “appropriate safeguards,” namely transfers not covered by an adequacy decision under Article 45. Finally, Article 49 outlines “derogations” that, in the absence of an adequacy decision or appropriate safeguards, allow for the transfer of data outside the EU.
The adequacy assessment under Article 45 authorizes the European Commission to declare a third country’s data protection mechanisms “adequate” under the GDPR, allowing personal data to flow from the EU to that country without further safeguards. In making an adequacy decision, the Commission must consider various factors, including the rule of law, human rights, and protections for fundamental freedoms in the third country.[23] It examines the general and sector-specific legislation of the third country, especially regarding public security, defence, national security, and criminal law.[24] The assessment also reviews public authority access to personal data, data protection rules, and the existence of effective data subject rights and judicial redress mechanisms.[25]
Crucially, the Commission continually monitors developments in third countries for changes that could impact an adequacy decision.[26] If it becomes apparent, particularly after a periodic review, that the level of protection in a third country is no longer adequate, the Commission may repeal, amend, or suspend the adequacy decision.[27]
Without adequacy status, a third country may still transfer personal data from the EU by implementing appropriate safeguards under Article 46 of the GDPR. Transfers pursuant to Article 46 require that the controller or processor of the data provide one of the appropriate safeguards listed under Article 46(2), and that enforceable data subject rights and effective legal remedies for data subjects are available.[28]
Finally, without adequacy status or appropriate safeguards, third countries may still receive personal data transferred from the EU under the derogation provision.[29] Article 49 enumerates specific situations in which such a transfer may take place, such as where the data subject has explicitly consented to the transfer after having been informed of the risks[30] or where the transfer is necessary for important reasons of public interest.[31]
While Articles 45, 46, and 49 do not expressly establish a hierarchy of transfer mechanisms, the wording of each Article suggests that the three mechanisms are interrelated.[32] Article 46 states that “in the absence of a[n adequacy] decision” the controller or processor may transfer pursuant to the appropriate safeguard mechanism.[33] The derogation provision under Article 49 makes a similar statement, adding the absence of appropriate safeguards as a condition.[34] This language suggests that data transferors should rely first on adequacy decisions where possible, then on appropriate safeguards, and only where neither applies on derogations.[35]
4.1 Schrems II
The guiding principle of Chapter V is to ensure that the protections afforded by the GDPR are not undermined.[36] But the precise standard a third country must guarantee is not clear from the text alone. The Court of Justice of the European Union (CJEU) addressed this question directly in Data Protection Commissioner v Facebook Ireland Ltd and Maximillian Schrems [Schrems II].[37] In that case, the CJEU clarified the concept of “essential equivalence,”[38] which supplies the precise standard a third country must guarantee.
The Schrems II decision, a continuation of the Court’s previous ruling in Schrems I[39] arising from the same facts, critically assessed the EU-US Privacy Shield.[40] Schrems II followed the Commission’s determination that the Privacy Shield provided an adequate level of protection under Article 45 of the GDPR. The CJEU ultimately disagreed with the Commission’s determination, declaring the Privacy Shield an invalid mechanism for transferring personal data from the EU to the US.[41]
The CJEU emphasized that US law lacked the necessary limitations and safeguards on government interference with personal data and did not provide effective judicial protection against such intrusions.[42] The CJEU concluded that personal data processing, including via access by a third party, constitutes an interference with the fundamental rights enshrined in Articles 7 and 8 of the EU Charter, irrespective of whether the data is sensitive or whether the persons concerned were inconvenienced in any way.[43] While recognizing that these rights are not absolute, the CJEU noted that any limitations on them must be provided for by law, respect the essence of the fundamental right, adhere to the principle of proportionality, and meet objectives of general interest or the need to protect the rights and freedoms of others.[44] The CJEU found that certain US surveillance programs did not meet these standards, as they allowed extensive access to personal data without adequate limitations or safeguards, thus failing to ensure a level of protection essentially equivalent to that guaranteed within the EU.[45]
There are two main takeaways from Schrems II. First, the CJEU was highly critical of the Commission’s 2016 assessment that the Privacy Shield provided adequate levels of protection, signalling that increased scrutiny is warranted for these assessments in the future. Second, the Court elaborated on “essential equivalence” as the standard for third country data transfers under Chapter V of the GDPR.
5. AIDA’s Impact on Canada’s Adequacy
In its recent report to the European Parliament and the Council on the review of Canada’s adequacy decision under Article 25(6) of Directive 95/46/EC, the European Commission concluded that Canada continues to provide an adequate level of protection for personal data transferred from the EU, extending Canada’s adequacy status.[46] The Commission noted that legislative amendments, case law, and the actions of oversight bodies, particularly under the Personal Information Protection and Electronic Documents Act (PIPEDA),[47] have enhanced data protection levels.[48]
This was welcome news for Canada and Canadian organizations that process personal information from the EU.[49] However, while reaffirming Canada’s adequacy status, the Commission expressly indicated its intent to continuously monitor the country’s legal developments in data protection.[50] This ongoing scrutiny aims to ensure that potential legislative and regulatory changes, such as those represented by Bill C-27 and AIDA, align with the EU’s evolving standards of data protection.
Maintaining adequacy status is important to Canada’s trade with the EU, yet AIDA’s silence on human rights likely does not align with the EU’s regulatory environment. This divergence is concerning in light of the requirement that adequacy decisions rest on an essentially equivalent level of data protection.
5.1 Is AIDA Essentially Equivalent?
Schrems II sheds light on the EU’s approach to determining whether a third country provides adequate data protection under the GDPR, but it is not immediately clear how this decision and the GDPR might interact with Canada’s AI regulations. Nonetheless, the decision presents a significant challenge for Canada and its adequacy status.
The development and use of AI will engage the fundamental rights under the EU Charter. Data and AI are functionally inseparable and consequently share overlapping privacy concerns. The interconnection of AI, data, and privacy is discussed in the literature at length.[51] This interconnection results from the reality that the development and use of AI require the collection and processing of massive amounts of data, including personal information.[52] Data collection itself can be intrusive to privacy, especially when collected without consent, and the intrusion can be exacerbated by the sheer scale of certain algorithmic systems and their data-collecting capabilities.[53] AI also has the potential to facilitate a wide range of human rights violations at an unprecedented scale, and to do so discreetly and arbitrarily.[54]
The CJEU has repeatedly emphasized the importance of minimum safeguards to ensure that interferences with fundamental rights are limited to those that are strictly necessary. AIDA contains no human rights safeguards. Without any consideration of the human rights impacts of AI, it is difficult to imagine how potential interferences with privacy and human rights will be limited to what is “strictly necessary.” Nor does AIDA provide individuals with any actionable rights in relation to human rights interferences. The EU AI Act’s fundamental rights impact assessments, which restrict conduct that could impact human rights and provide for remedies, contrast sharply with Canada’s lack of protections.
Ultimately, AIDA’s human rights gap jeopardizes Canada’s data protection adequacy status. The economic implications for Canada of losing EU adequacy status cannot be overstated. The free flow of personal data from the EU, facilitated by adequacy status, significantly reduces compliance costs for Canadian organizations and streamlines Canada’s trade and economic relations with the EU. Any perception that Canada’s AI regulatory framework is misaligned with the EU’s protection of human rights could trigger re-evaluation, potentially leading to restrictions on data flows. This would also affect the operational capabilities of companies relying on transatlantic data transfers.
5.2 What Should Canada Do?
Given the EU AI Act’s explicit preamble on protecting fundamental freedoms in the face of AI’s potential impact on human rights, Canada could start by including a similar preamble statement. AIDA should replace vague wording like “uphold” and “principles of international human rights law” with a clear statement that AIDA will protect human rights against the adverse impacts of AI. This gesture, while minimal, would be a starting point for alignment with the EU and would establish a statutory posture towards addressing these human rights concerns. By being direct, this amendment would contribute to the essential equivalence analysis by centering a baseline commitment to human rights considerations.
Beyond symbolism, Canada could further strengthen its adequacy status by incorporating a mechanism akin to the EU AI Act’s fundamental rights impact assessment. This would involve a systematic analysis of AI’s potential effects on fundamental rights. Implementing such an impact assessment would provide a tangible means of mitigating the risks associated with AI technologies, which is necessary to satisfy the essential equivalence analysis. Ultimately, these steps would help reinforce Canada’s position in maintaining its adequacy status for data transfers.
6. Conclusion
The absence of explicit human rights considerations in AIDA is in tension with the EU’s integration of fundamental rights in the plain text and spirit of its AI regulatory framework, potentially risking Canada’s valuable adequacy status for data protection. The essential equivalence principle sets the threshold for data protection adequacy outside the EU and likely requires the integration of human rights safeguards within AI regulatory frameworks. AIDA’s current framework, with its limited engagement with human rights issues, likely falls short of this standard. To bridge this gap, Canada should incorporate explicit human rights provisions in AIDA. This could take the form of a preamble that directly acknowledges the human rights impacts of AI and a structured fundamental rights impact assessment mechanism.
[1] Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, 1st Sess, 44th Parl, 2022, (first reading 16 June 2022) [AIDA].
[2] Constitution Act, 1867 (UK), 30 & 31 Vict, c 3, s 92, reprinted in RSC 1985, Appendix II, No 5.
[3] EU, Regulation 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA relevance), [2024] OJ, L 1689/1 [EU AI Act].
[4] EU, Regulation 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), [2016] OJ, L 119/1 [GDPR].
[5] On January 6, 2025, Prime Minister Trudeau requested the prorogation of Parliament until March 24, 2025. All parliamentary business, including the passing of AIDA, has been effectively paused until that date. Prorogation of a session typically ends all proceedings before Parliament, and bills that have not received Royal Assent prior to prorogation “die” and must be reintroduced in the next session. This will certainly affect Bill C-27 and AIDA, but the precise results remain unknown.
[6] While the federal Directive on Automated Decision-Making is an important policy document, it is only internally binding on the federal government and does not create actionable rights for individuals or organizations. Ultimately, the Directive is not law. Therefore, while the regulatory landscape is not entirely “empty,” AIDA remains Canada’s first meaningful attempt at creating AI legislation (see Treasury Board of Canada Secretariat, Directive on Automated Decision-Making (Ottawa: TBCS, 2019), online: <https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=32592>).
[7] Innovation, Science and Economic Development Canada, The Artificial Intelligence and Data Act (AIDA) – Companion document (Ottawa: Government of Canada, 2023), online: <https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document> [Companion Document].
[8] Teresa Scassa, “AI, Human Rights, and Canada’s Proposed AI and Data Act” (19 March 2024), online: <https://www.teresascassa.ca/index.php?option=com_k2&view=item&id=380:ai-human-rights-and-canadas-proposed-ai-and-data-act&Itemid=80>.
[9] Ibid. The initial response may be that privacy rights are dealt with under privacy legislation for the public and private sectors at federal and provincial levels of government. However, these statutes deal with data protection, and not the privacy implications of AI directly (see ibid).
[10] Ibid.
[11] Future of Life Institute, “The EU Artificial Intelligence Act: Up-to-date developments and analyses of the EU AI Act” (last visited 3 March 2025), online: <https://artificialintelligenceact.eu/>.
[12] EU AI Act, supra note 3, Preamble at para 3.
[13] Ibid, Preamble at para 5.
[14] Ibid, art 1(1).
[15] European Union Agency for Fundamental Rights, “What are fundamental rights?” (last visited 3 March 2025), online: <https://fra.europa.eu/en/content/what-are-fundamental-rights#:~:text=‘Fundamental%20rights’%20expresses%20the%20concept,is%20used%20in%20international%20la>. The Charter of Fundamental Rights of the EU and the European Convention on Human Rights, for example, are substantially similar. Traditionally, legislators in the EU use the term fundamental rights to refer to those rights specifically captured in its Constitution, and human rights to refer to the general rights inherent to all human beings, particularly in the international law context.
[16] EU AI Act, supra note 3, art 27.
[17] Ibid, Preamble at para 96, art 27.
[18] Ibid, art 27.
[19] Supra note 4.
[20] Ibid, art 44.
[21] Ibid, art 1(1).
[22] Ibid, art 1(1); Laura Drechsler & Irene Kamara, “Essential Equivalence as a Benchmark for International Data Transfers After Schrems II” in Eleni Kosta & Ronald Leenes, eds, Research Handbook on EU Data Protection Law (Cheltenham: Edward Elgar Publishing Ltd, 2022) 314 at 317.
[23] GDPR, supra note 4, art 45(2)(a).
[24] Ibid.
[25] Ibid.
[26] Ibid.
[27] Ibid.
[28] Ibid, art 46(1).
[29] Ibid, art 49.
[30] Ibid, art 49(1)(a).
[31] Ibid, art 49(1)(d).
[32] Drechsler & Kamara, supra note 22 at 324.
[33] GDPR, supra note 4, art 46.
[34] Ibid, art 49.
[35] Drechsler & Kamara, supra note 22 at 324.
[36] GDPR, supra note 4, art 44.
[37] Data Protection Commissioner v Facebook Ireland and Maximillian Schrems, C-311/18, [2020] ECLI:EU:C:2020:559 [Schrems II].
[38] Ibid at para 104.
[39] Maximillian Schrems v Data Protection Commissioner, C-362/14, [2015] ECLI:EU:C:2015:650 [Schrems I].
[40] US Department of Commerce, “EU-US Privacy Shield” (last visited 3 March 2025), online: <https://www.commerce.gov/tags/eu-us-privacy-shield#:~:text=The%20EU%2DU.S.%20Privacy%20Shield,United%20States%20in%20support%20of>.
[41] Schrems II, supra note 37 at paras 199–200.
[42] Ibid.
[43] Ibid at paras 170–71.
[44] Ibid at paras 172–74.
[45] Ibid at paras 178–85.
[46] EU, Report from the Commission to the European Parliament and the Council on the first review of the functioning of the adequacy decisions adopted pursuant to Article 25(6) of Directive 95/46/EC, [2024] at 9 [Canada Adequacy Decision].
[47] Personal Information Protection and Electronic Documents Act, SC 2000, c 5 [PIPEDA].
[48] Canada Adequacy Decision, supra note 46 at 9.
[49] Note that the adequacy decision only applies to organizations that are subject to PIPEDA and processing activities that are within PIPEDA’s jurisdiction.
[50] Canada Adequacy Decision, supra note 46 at 9.
[51] See generally Irina Pencheva, Marc Esteve & Slava Jankin Mikhaylov, “Big Data and AI – A transformational shift for government: So, what next for research?” (2018) 35:1 Pub Pol’y & Admin 24.
[52] Ibid.
[53] Ibid.
[54] See e.g. Yuan Stevens et al, Face Recognition Technology for the Protection of Canada’s Parliamentary Precinct and Parliament Hill? Potential Risks and Considerations (Toronto: Toronto Metropolitan University, 2022).