Using AI to Unlock the Brain

This week in AI News & Insights: cognitive neuroscientists are using AI to unlock the brain and understand how it works; Garry Kasparov says AI will make us more human; and how to avoid “analysis paralysis” with your AI.

Using AI to unlock the brain

What drives the best-performing AI image recognition systems? Neural networks.

The idea of modeling a computer system after the human brain was first proposed back in 1944, and experienced resurgences in the late 1960s and 1980s. Only over the last decade, however, have major technology advances (such as using GPUs for machine learning) allowed researchers to push the bounds of what “neural nets” can do. This method of deep learning still has its limits, but the barriers continue to fall.

Now, cognitive neuroscientists are using AI to unlock the brain. Their goal? To learn how the human brain forms associations, by studying artificial brains made up of complex neural networks.

These “mini-brains… can be studied, changed, evaluated, compared against responses given by human neural networks,” says Aude Oliva of the Massachusetts Institute of Technology, “so the cognitive neuroscientists have some sort of sketch of how a real brain may function.”

In one project, Oliva’s team taught an AI to recognize 350 places, such as living room, park and bedroom. And the AI didn’t stop there. It even learned to recognize people and animals within each environment.
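To make the kind of task Oliva’s team describes a little more concrete, here is a minimal sketch of how a scene classifier might be fine-tuned in PyTorch. This is our own illustration, not the MIT team’s code: the dataset path, the use of 350 classes, and the choice of a ResNet-18 backbone are all assumptions for the example.

```python
# Illustrative sketch only (not the MIT team's code): fine-tune a pretrained
# CNN to classify scene categories, assuming an ImageFolder-style dataset
# with one subdirectory per scene class (e.g. "living_room/", "park/").
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_SCENE_CLASSES = 350  # number of place categories, as in the article

# Standard ImageNet-style preprocessing.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical dataset path; replace with your own scene images.
train_set = datasets.ImageFolder("data/scenes/train", transform=preprocess)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a network pretrained on ImageNet and swap in a new classifier head.
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, NUM_SCENE_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One pass over the training data (a real run would use several epochs).
model.train()
for images, labels in train_loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

Once trained, the network’s internal activations are exactly the kind of “mini-brain” responses that researchers can probe and compare against human data.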

Oliva presented her team’s work at the recent Cognitive Neuroscience Society meet-up, and the Economic Times has a great summary.

AI will make us more human

Garry Kasparov is best known as a chess grandmaster and former world champion. But he also has some interesting insights into AI and automation. Surprised?

Automation will force us to focus on what we do better than computers, Kasparov says. And as robots increasingly take over repetitive physical and cognitive work, humans will shift toward creative and imaginative tasks. In fact, Kasparov believes that “new jobs will be built around compensating for AI’s creative shortcomings.”

We’ve written at length on this blog about overblown fears of “Automation Armageddon”. And we’ve explored many competing outlooks on AI automation. But Alasdair Wilkins’ new interview with Kasparov for Inverse.com offers a different, fascinating perspective.

(Also, shout-out to the graphic designer who makes the interview pop.)

Read Wilkins’ article and interview with Kasparov on Inverse.com

How to avoid analysis paralysis with AI

Lexalytics CIO Carl Lambrecht is a font of (somewhat sardonic) wisdom. In this article, Carl asks, “are we being unreasonable in our expectations” of AI? When is “good enough” good enough?

The answer is… complicated. Obviously, tolerances vary between industries and applications of AI. “Good enough for a flight path is probably a bit more stringent,” Carl points out. “Good enough” for reading house numbers, on the other hand, can be a much looser standard.
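To put the idea in concrete terms, here is a toy sketch (our own illustration, with made-up thresholds rather than numbers from Carl’s article) of how “good enough” can simply mean a different accuracy target for each application.

```python
# Toy illustration: "good enough" is application-specific.
# These thresholds are invented for the example, not Lexalytics guidance.
ACCURACY_TARGETS = {
    "flight_path_prediction": 0.999,  # safety-critical: very stringent
    "house_number_reading": 0.95,     # occasional misreads are tolerable
    "marketing_sentiment": 0.85,      # directional insight is enough
}

def is_good_enough(application: str, measured_accuracy: float) -> bool:
    """Return True if the model meets the bar for this application."""
    return measured_accuracy >= ACCURACY_TARGETS[application]

print(is_good_enough("house_number_reading", 0.96))       # True
print(is_good_enough("flight_path_prediction", 0.96))     # False
```

The point is simply that the threshold comes from the problem, not from an abstract notion of model perfection.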

At Lexalytics, we believe that chasing a perfect AI is wasteful. Instead, we help our clients avoid “analysis paralysis” by finding where “good enough” really is good enough for them: a system that solves their problem without wasting their time or money.

Learn how to avoid analysis paralysis with AI

More Weekly AI News & Insights

Stay tuned to our blog for more weekly AI news and insights, interest pieces, and thought leadership.

In the meantime, learn how the Lexalytics Intelligence Platform draws insights from your unstructured text data.
