In response to this tweet:
NEW: the insurance startup Lemonade claimed it was analyzing “non-verbal cues” like eye movements and speech patterns to reject insurance claims.
then the company deleted a bunch of tweets, and now it’s saying “we def do not do phrenology” https://twitter.com/janusrose/status/1397602847064215554
This is part of a trend of problems with companies that use AI to make decisions about people. Now, I’m not involved in AI ethics (for that you should follow Timnit Gebru), but my work in career development involves understanding how people assess other people.
A core problem in hiring is the role bias plays. If you ask 100 people whether bias plays a role in their decisions about job candidates, probably 90 will say no. That’s actually part of the problem. Our brains do this wonderful thing where they say “I’ve seen that behaviour before, and it meant they were lying.” But the experiences your brain is assessing against are rooted in the culture you were raised in: the things it flags are what people within your culture do when they’re lying, and those may be different for people raised in other cultures. Your brain, however, tells you that your experience is universal. And that’s not just about lying; it does the same for what reads as attentive, friendly, pleasant, combative, or dedicated. So when we’re interviewing someone for a job and we think about “fit”, it’s very, very easy to favour those raised in our own culture.
So let’s talk AI. The cues an AI is told mean certain things are also culturally conditioned, usually by how the system was trained. The problem is that AI can’t critically assess itself and say “wait, is that true, or just what my upbringing says?”
Now, there are many employers who don’t critically assess their biases, and that’s a problem. But transferring those biases to an AI and then claiming it’s unbiased because it’s AI is much, much worse. And that’s where we are. AI remains subject to the garbage-in, garbage-out problem, so pretending it’s unbiased is untrue. What AI does is apply the same biases to everyone. That’s wildly different from being unbiased.
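To make that concrete, here’s a minimal sketch (all names, data, and the “cue” are hypothetical, invented for illustration) of how a rule learned from one culture’s behaviour misfires, uniformly, on another:

```python
# Hypothetical "deception cue" a model learned from culture-A training data,
# where averting gaze happened to correlate with lying.
def trained_model(averts_gaze: bool) -> str:
    """Apply the learned rule: averting gaze means deceptive."""
    return "deceptive" if averts_gaze else "truthful"

# Hypothetical applicants. In culture B, averting gaze signals respect,
# not deceit -- everyone here is in fact truthful except A2.
applicants = [
    {"name": "A1", "culture": "A", "averts_gaze": False, "truthful": True},
    {"name": "A2", "culture": "A", "averts_gaze": True,  "truthful": False},
    {"name": "B1", "culture": "B", "averts_gaze": True,  "truthful": True},
    {"name": "B2", "culture": "B", "averts_gaze": True,  "truthful": True},
]

for person in applicants:
    verdict = trained_model(person["averts_gaze"])
    correct = (verdict == "truthful") == person["truthful"]
    print(person["name"], person["culture"], verdict,
          "correct" if correct else "WRONG")
```

The model scores perfectly on culture A and fails on every culture-B applicant. Note that it is perfectly consistent: the same rule, applied identically to everyone. Consistency is not the same thing as fairness.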