Although Canada does not yet have comprehensive legislation governing Artificial Intelligence (“AI”), employers are increasingly exploring AI-driven processes, recognizing that the efficiency gains associated with AI are no longer in serious dispute. At the same time, new statutory obligations for employers using AI in recruitment in Ontario will take effect on January 1st. In light of this evolving landscape, this alert highlights key considerations for organizations contemplating the use of AI in recruitment within Canadian workplaces.
Human Rights Considerations
Although employers may introduce AI in recruitment with the intent to relieve human resources pressures, reduce the cost and time of hiring, improve their ability to identify the best candidates, and eliminate bias and discrimination from their processes, it is crucial to assess the AI system itself for compliance with human rights obligations. Furthermore, existing and emerging AI regulations increasingly require organizations to conduct AI impact assessments and/or comply with human rights laws when deploying AI systems.
Bias & Errors
Bias and discrimination in AI can be easy to overlook and, if left unchecked, AI can cause deep and longstanding harm to individuals, communities and organizations. Bias and discrimination can also present economic, legal and public relations consequences for organizations.
In a 2024 report, the Ontario Human Rights Commission (the “Commission”) cited the following examples of unintended potential human rights violations:
- Interview technologies may be less reliable for assessing applicants who have speech impediments, who require a screen reader, or who have a different first language. Technologies used to analyze applicants’ emotional expressions are more likely to incorrectly assign negative emotions to Black faces than to White faces.
- AI technologies have also been found to use personal information, such as names, postal codes and gaps in employment history, to make inferences about applicants’ race, disability, age and other Code grounds. Hiring decisions are then made based on proxy data that may be discriminatory.
Assessing your AI System
Although any assessment of human rights in AI is a multi-faceted process that requires integrated expertise, the Law Commission of Ontario and the Commission have introduced a human rights AI impact assessment (“HRIA”) tool to assist organizations in assessing AI systems for compliance with human rights obligations. The tool is relevant and applicable to any organization, public or private, intending to design, implement, or rely on any algorithm, automated decision-making system or artificial intelligence system. In particular, the purpose of the tool is to:
- Strengthen knowledge and understanding of human rights impacts;
- Provide practical guidance on specific human rights impacts, particularly in relation to non-discrimination and equality of treatment; and
- Identify practical mitigation strategies and remedies to address bias and discrimination arising from AI systems.
Assessing for bias and discrimination is not a simple task. As such, it should not be an afterthought or minor consideration but should be integrated into every stage of the design, development and implementation of AI.
Final Thought
For more information, you can access the HRIA here and find more information specific to the upcoming changes in Ontario in our recent Alert found here. e2r® Members should also be sure to join us next week for our e-learning as we delve more into this topic!
There are many important human rights and legal issues that can arise with AI. Removing or reducing bias does not necessarily resolve other issues such as surveillance, privacy, data accuracy, procedural fairness, etc. Seek expert advice where appropriate.
If you would like to discuss any of the above or need any other assistance, please don’t hesitate to reach out to speak to an e2r® Advisor.