How is an AI like a car? Both can function for a while on their own, but neglect them for too long and you’re in trouble. Just like a car, AI requires maintenance. This week’s AI News & Insights explores the importance of regular AI maintenance and examines some stories of AI failure.
Plan for failure; work on your reaction times; adopt a change management model. Manifesto of a management consulting firm? No, it’s Paul Barba’s latest feature in KDnuggets.
As cars become more complex, insurance companies are advising owners to keep up with preventative maintenance before the cost of repairs becomes staggering. Similarly, as an AI grows more complex, the risks and costs of AI failure grow larger.
“Through auditing, quantitative measuring and proactive organizational responsiveness, you can avoid the equivalent of blowing an AI gasket.” – Paul Barba
Just like your car, an AI system requires maintenance to remain robust and valuable. And just like your car, you may be faced with a sudden, catastrophic failure if you don’t keep it up to date.
In this article, Paul explains how data scientists can avoid AI failure by maintaining their systems with fresh training data and updated methods and models.
Of course, neglecting your AI maintenance is just one path to AI failure. Writing on Medium, Francesco Gadaleta, Chief Data Officer at Abe.ai, explores 9 more “creative ways to make your AI startup fail”.
Francesco’s list is comprehensive, funny, and thought-provoking. It features some classic paths to failure, such as “Cut R&D to save money” and “Work without a clear vision”. But, Francesco says, “there is a plethora of ways to fail with AI”.
My favorite is #2, “Operate in a technology bubble.”
As Francesco points out, AI doesn’t always fail due to technical problems. Sometimes, the problem is a lack of social need or interest.
“Artificial intelligence technologies cannot be built in isolation from the social circumstances that make them necessary,” Francesco writes.
This is a fantastic point. In the rush to stay ahead of the technology curve, companies often fail to consider the impact of the biases inherent in their data and models. This is particularly dangerous for companies working in data analytics for healthcare, biotechnology, financial services and law.
“Operating in a bubble and ignoring the current needs of society is a sure path to failure.” – Francesco Gadaleta
Francesco’s list is a must-read for any executive, developer or data scientist looking to add AI to their technology stack.
5 stories of AI failure in 2017
Back in 2016, Microsoft’s Tay chatbot quickly became corrupted and horrifyingly racist. A year later, 2017 saw more than its share of AI failures.
In this feature, Analytics India Magazine summarizes the “top 5 AI failures from 2017”. Srishti Deoras argues that these failures suggest companies should be more cautious and diligent when implementing AI systems.
In one case, Facebook had to shut down their “Bob” and “Alice” chatbots after the bots started talking in their own language. And that’s just the beginning. Srishti continues with more examples from Mitra, Uber, Apple and Amazon.
Together, these 5 AI failures cover: chatbots, political gaffes, autonomous driving accidents, facial recognition mix-ups, and angry neighbors.
How can an AI cause a political gaffe? Read Srishti’s article to find out.
Stay tuned to our blog for more Weekly AI News and Insights, interest pieces and thought leadership.