When Is “Good” Good Enough for AI?

Popular media and hype cycles are leading us to expect radical changes and perfect results from artificial intelligence. But you don’t read perfectly, so why should your AI? Let’s explore when “good” is good enough for AI.

We have high expectations of AI. If it doesn’t spit out a flawless output, it’s not good enough. But are we being unreasonable in our expectations?

Sure, it’s not great when a computer mistakes a 3D-printed turtle for a rifle. And we get that you might be nervous about the defensive driving skills of your self-driving car.

But when the stakes are lower—say extracting the text from an article and assessing sentiment—how perfect does an AI truly need to be?

What does “good enough for AI” mean?

When dealing with data, we’re also dealing with imprecision and incompleteness. There’s a point where we have to agree to compromise so that things actually get done. Analysis paralysis is real, and it’s expensive.

Rather than aiming for perfection, we should be aiming for good enough. Obviously tolerances for what’s “good enough” will vary across domains and projects. “Good enough” for a flight path is probably a bit more stringent than “good enough” for reading house numbers.

At Lexalytics we work with text analytics. We extract text data and analyze it to glean sentiment at various levels—the word level, the sentence level, the discourse level and the cross-document level.

Getting it right matters, but does getting it perfect? Often the gap between right and perfect is small enough to be of vanishing importance.
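To make the word and sentence levels concrete, here is a minimal sketch of lexicon-based sentiment scoring. The tiny polarity lexicon and the averaging rule are illustrative assumptions, not Lexalytics' actual pipeline:

```python
# Tiny hypothetical polarity lexicon (illustrative values, not a real resource).
LEXICON = {"great": 1.0, "good": 0.5, "fine": 0.2,
           "bad": -0.5, "awful": -1.0, "terrible": -1.0}

def word_scores(text):
    """Word level: score each token individually; unknown words score 0."""
    return [(w, LEXICON.get(w, 0.0)) for w in text.lower().split()]

def sentence_score(text):
    """Sentence level: average the word scores into one number."""
    scores = [s for _, s in word_scores(text)]
    return sum(scores) / len(scores) if scores else 0.0

print(sentence_score("the food was great"))
```

Real systems layer discourse- and document-level aggregation on top of this, but the principle is the same: combine noisy local signals into a usable overall judgment.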

Perfection is a measure of time and resources

Say our extracted text contains a typo, or it has pulled in some extraneous metadata. Maybe a whole sentence is missing.

As a human reader you’ll still get the gist of what’s going on. As humans we’re used to creating “good enough” understanding from “good enough” data. We skim, we condense and we weight information based on our goals. A painstaking, comprehensive reading takes time and resources, so we assign them accordingly.

The same is true of AI. We could spend the time and resources on solving the problem perfectly rather than just solving it. But frankly we think that those resources could be better used elsewhere—like on crunching more data.
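A toy example makes the point. Here a crude polarity count (an assumed cue-word scorer, not a production system) reaches the same verdict on clean text and on text marred by typos and stray markup:

```python
# Toy sketch: a crude polarity count still gets the gist of noisy text.
POSITIVE = {"excellent", "love", "great"}
NEGATIVE = {"poor", "hate", "awful"}

def polarity(text):
    """Return 'positive', 'negative', or 'neutral' by counting cue words."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

clean = "An excellent read, I love this article"
noisy = "An excelent read, I love this artcle <div class=meta>"  # typos + stray markup

print(polarity(clean), polarity(noisy))
```

The typos knock out one cue word, but enough signal survives that the overall call is unchanged. That is the "good enough" trade in miniature.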

Mo’ data means mo’ problems (for us to solve)

AI feeds on data. Generally, the more data it has access to, the better and more precise its results get. The reason AI is exploding the way it is has a lot to do with the fact that we’re living in the data age.

Billions of searches are conducted daily. In the same window, billions of social media posts, millions of news articles and countless photos and videos are uploaded. It’s a big data wonderland out there.

What do you think is better for your AI? Being perfectly trained on a carefully curated subset of data, or being able to graze across limitless pastures?

Making sure that your AI has perfect input or flawless output isn’t the goal. The goal is to be good enough to solve your problem. Let’s focus on getting it right, rather than getting it perfect.

And besides, with enough data input, “good enough for AI” tends to edge closer to perfect anyway.