AI can become a massive discrimination engine. What is YOUR responsibility as a developer?
Renato Byrro
Posted on October 1, 2019
All right, here's the deal:
AI is being used - reportedly for the first time - to screen job applicants in the UK and worldwide.
Is it just me, or does this smell really, really bad to you too? Please share your thoughts in the comments.
Let's discuss:
What do you think is your role, as a developer, in preventing AI from being used as a massive human discrimination engine based only on external traits?
Note: You/your company might not be working with AI now. But rest assured, you won't be able to avoid it for much longer.
Reflections on the article for background:
The company behind the AI "... claims it enables hiring firms to interview more candidates ... and provides a more reliable and objective indicator of future performance free of human bias".
Ok.
Wow, wow, wow, wait. "Free of human bias"?
Last time I checked, humans were building the AI models. And the training samples!
"The algorithms select the best applicants by assessing their performances in the videos against about 25,000 pieces of ... information compiled from previous interviews of those who have gone on to prove to be good at the job".
console.log("wow, ".repeat(10));
console.log("wait, ".repeat(99));
Who gets to judge who has "proved to be good" at the job? Assessing an individual's performance is one thing: if the evaluator makes a mistake (or is biased), they damage that one individual.
This is already bad enough, but the scale is one individual.
Now, who decided it would be a good idea to scale that to an entire population?
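To make the "human bias in, human bias out" point concrete, here's a minimal sketch in plain JavaScript. The data and the "model" are entirely hypothetical (this is not the vendor's actual algorithm): it just shows that if human evaluators systematically favored one group when labeling who "proved to be good", any model fitted to those labels reproduces that bias automatically.

```javascript
// Hypothetical historical records. The "goodAtJob" labels came from
// human evaluators who, in this toy example, favored group "A".
const history = [
  { group: "A", skill: 0.9, goodAtJob: true },
  { group: "A", skill: 0.5, goodAtJob: true },  // biased label: favored
  { group: "B", skill: 0.9, goodAtJob: false }, // biased label: penalized
  { group: "B", skill: 0.5, goodAtJob: false },
];

// A naive "model": the rate of positive labels per group.
// Any learner fitted to these labels picks up the same pattern.
function trainGroupRates(records) {
  const counts = {};
  for (const r of records) {
    counts[r.group] = counts[r.group] || { pos: 0, total: 0 };
    counts[r.group].total += 1;
    if (r.goodAtJob) counts[r.group].pos += 1;
  }
  const rates = {};
  for (const g of Object.keys(counts)) {
    rates[g] = counts[g].pos / counts[g].total;
  }
  return rates;
}

const model = trainGroupRates(history);

// Two equally skilled new candidates now get different scores purely
// because of the group membership baked into the training labels.
console.log(model); // { A: 1, B: 0 } — the evaluators' bias, now automated
```

Notice the model never even looked at `skill`: the biased labels alone were enough to encode discrimination, and the "objective" algorithm now applies it to every applicant at once.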
Please share your thoughts in the comments.
Cover image by Chaozzy Lin on Unsplash