Bias in AI Should Make You Think


Welcome to Weekly AI News & Insights from Lexalytics, a curated selection of articles and interest pieces brought to you by the leaders in “words-first” artificial intelligence. This week: new cases of racial and gender bias in AI facial recognition should make you think; experts say humans may need to merge with AI to survive; and exploring the role of AI in ethical decision making.

Racial and gender bias in AI facial recognition

Researchers at MIT and Stanford have found extreme gender and skin color biases in three different facial analysis programs. While one company claimed 97 percent accuracy for its system, the researchers’ findings are very different. “For darker-skinned women… the error rates were 20.8 percent, 34.5 percent, and 34.7 percent,” they write.

And that’s just the tip of the iceberg. “The data sets used to train these systems were more than 77 percent male, and more than 83 percent white,” Engadget reports. “This narrow test base results in a higher error rate for anyone who isn’t white or male.”

This kind of bias in AI facial recognition is a huge problem. The potential for harm is staggering. Thankfully, this issue has been garnering more media coverage in recent months.

Learn more on Engadget.

Should you prepare to merge with AI?

Futurism is the exploration of technological trends. Lately, futurists have split into two factions: pro-AI and AI alarmists. The pro-AI group sees huge benefits from ethical applications of targeted artificial intelligence. AI alarmists focus on the potential of AI making humans redundant. Now, two prominent AI alarmists, including Elon Musk, are sounding another alarm.

In short: they warn that people may eventually need to “merge” with computers in order to survive. Now, to be honest, this writer finds it all a bit overdone. But the alarmist perspective is certainly interesting.

Read the article on CNBC.

The role of AI in ethical decision making

“AI isn’t an arbiter of ethics, and it’s not going to be in the foreseeable future,” writes Jeff Catlin on Forbes. “But what AI can do in the ethics space is support and augment our own decision making — and act as an alarm when we’re getting it wrong. Get it right, though? That’s on us.”

These words seem especially prudent in light of the AI alarmism described above and the mounting cases of AI bias caused by skewed data sets and poorly implemented training methods. In this article, Jeff explores the potential, and the pitfalls, of using AI to influence your decision making.

Read Jeff’s thoughts on Forbes.

Weekly AI News & Insights

Have some thoughts on the impact of bias in AI facial recognition? Leave a comment below! Curious how AI and machine learning can supercharge your data analytics? Drop by our website to learn more about our “words-first” AI.
