AI hiring systems may be enabling discrimination against marginalised jobseekers

AI hiring systems are fast becoming the norm in companies' recruitment processes. Whilst this proves time and cost efficient for employers, it may also be contributing to discrimination against applicants from marginalised groups.
New research has revealed that the algorithms used to screen and assess applicants can be influenced by factors such as an applicant's English proficiency, facial analysis techniques, and unrepresentative data embedded in the algorithm or used by the organisation to train the model.
One notable example from the research referenced an AI system developed by Amazon that learned to downgrade the applications of people whose CVs contained the word 'women's'.
Whilst Australia has strong anti-discrimination laws in place, there are currently no laws that specifically regulate how companies use AI in their recruitment processes. With an estimated 30% of Australian organisations using AI as part of recruitment, there are calls for the government to review existing laws and implement dedicated AI legislation similar to that of the European Union.