AMAZON has scrapped a recruitment tool that used artificial intelligence to grade job applicants, after it emerged that it was biased against women.
The tool learned to favour men over women after being trained on past CVs written mostly by male candidates.
Alexa is an example of a more successful AI built and used by Amazon
According to sources speaking to Reuters, Amazon started using the AI-based hiring software in 2014.
The retailing and tech giant wanted to streamline the hiring process for software development and technical roles, with its AI system quickly giving applicants a rating out of five.
"Everyone wanted this holy grail," one of the anonymous sources said. "They literally wanted it to be an engine where I'm going to give you 100 resumes, it will spit out the top five, and we'll hire those."
However, by 2015 human recruiters had realised that it unfairly marked down female applicants, penalising CVs that included words such as 'women's' (as in 'member of the women's hockey team').
Despite not showing bias itself, Alexa has been accused by some critics of perpetuating female gender stereotypes
As a result, the company ditched the AI-based recruiting tool at the beginning of 2017, according to Reuters, although The Sun understands that it was discontinued in 2015.
Its anti-woman bias developed because of the way artificial intelligence and machine learning work.
Through machine learning, it was taught how to assess applications by being fed past CVs submitted over a ten-year period.
The vast majority of these CVs were from male candidates (due to the tech industry's historical dominance by men), so the AI's idea of an 'ideal applicant' was excessively defined by male-associated characteristics.
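The mechanism is easy to demonstrate. As a rough illustration (not Amazon's actual system, whose details were never published), here is a toy scorer in Python: it learns per-word scores from a deliberately skewed pile of past CVs, and ends up penalising the token "women's" simply because that word only ever appeared in the historically rejected pile. All CV text and the scoring formula below are invented for the example.

```python
from collections import Counter

# Hypothetical toy training data: past CVs labelled by hiring outcome.
# Because most historical hires were men, the "hired" pile contains no
# female-associated tokens -- the same skew Reuters described.
hired = [
    "software engineer chess club captain",
    "software developer chess club",
    "engineer robotics club captain",
]
rejected = [
    "software engineer women's hockey team captain",
    "developer women's chess club",
]

def token_scores(hired, rejected):
    """Score each token by how much more often it appears in hired CVs."""
    h = Counter(t for cv in hired for t in cv.split())
    r = Counter(t for cv in rejected for t in cv.split())
    vocab = set(h) | set(r)
    # Laplace-smoothed frequency difference: >0 favours hiring, <0 penalises.
    return {t: (h[t] + 1) / (sum(h.values()) + len(vocab))
              - (r[t] + 1) / (sum(r.values()) + len(vocab))
            for t in vocab}

def rate(cv, scores):
    """Sum of per-token scores: a crude overall rating for a new CV."""
    return sum(scores.get(t, 0.0) for t in cv.split())

scores = token_scores(hired, rejected)

# Two CVs that differ by a single gendered token.
plain = rate("engineer chess club captain", scores)
marked = rate("engineer women's chess club captain", scores)
# The model rates the second CV lower, purely because "women's" occurred
# only in the (historically skewed) rejected pile.
```

The model never sees anyone's gender directly; the bias arrives entirely through correlations in the training data, which is why it can go unnoticed until someone audits the output, as Amazon's recruiters eventually did.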
The anonymous sources speaking to Reuters stated that Amazon's recruiters never based recruitment decisions on the tool's ratings alone, although they did look at recommendations when sorting through applications.
Amazon isn't the only company to run into trouble when using AI and machine learning in the context of recruitment.
In September, Facebook came under fire when it emerged that its algorithms had been preventing women from seeing certain job advertisements.
And beyond the world of recruitment, a 2016 study found that algorithms used in the US to predict reoffending rates among convicts were biased against African Americans.
Do you trust AI-based algorithms to make reliable decisions? Let us know in the comments!