Elon Musk vs Harvard, AI Alarmist vs AI Realist


Welcome to Weekly AI News & Insights from Lexalytics, a curated selection of articles and interest pieces brought to you by the leaders in “words-first” artificial intelligence. This week: it’s Elon Musk vs Harvard, as they spar over the dangers of narrow and general AI; a Google executive warns about “movie-inspired death scenarios” from Terminator-style AI; and 3 tips on how businesses can avoid dangerous AI.

Elon Musk vs Harvard, Round 2: Musk fires back

The opinion war between AI Alarmists and Realists rages on. Last week, Harvard professor Steven Pinker went on a Wired podcast to criticize Elon Musk’s attitude towards AI (and his cars, too). Pinker pointed to Musk’s own Tesla Motors as proof that Elon isn’t serious about the AI threat.

“If,” Pinker said, “Elon Musk was really serious about the AI threat he’d stop building those self-driving cars, which are the first kind of advanced AI that we’re going to see.”

A few days later, Musk fired back on Twitter.

[Image: Musk vs Harvard, Elon Musk’s tweet firing back at Pinker]

Remember, we’ve discussed Elon Musk’s AI alarmist rhetoric before. And our own CEO, Jeff Catlin, is in the AI Realist camp. In fact, Jeff says that AI won’t destroy us, and Tesla is proof. (Fun fact: Apple co-founder Steve Wozniak is an AI realist, too.) But in this case, we can see where Musk is coming from.

Narrow/targeted AI (like self-driving cars and natural language machine learning) is one thing. General AI (like the Cylons in Battlestar Galactica, Terminator’s Skynet, or the prescient 1999 Disney Channel Original Movie Smart House) is very different. So it’s fair for Musk to warn against the dangers of general AI, even as he invests in self-driving cars. And in “Elon Musk vs Harvard, Round 2” – we’ll give this point to Musk.

Now, with all that said, you should do your research and form your own opinions. Start with this Fortune piece. It offers some useful context about narrow AI versus general AI.

Read about Musk and Pinker’s Twitter spat on Fortune.com

Google exec: Terminator-style AI scenarios are coming

Tech predictions don’t get much more unnerving than this. Eric Schmidt, former Google CEO and executive chairman of Alphabet, says that “movie-inspired death scenarios… are one to two decades away.”

First, let’s take a step back. Schmidt made this prediction at February’s Munich Security Conference. Here’s his full comment in context:

“Everyone immediately then wants to talk about all the movie-inspired death scenarios, and I can confidently predict to you that they are one to two decades away. So let’s worry about them, but let’s worry about them in a while.” – Eric Schmidt

But in the same interview, Schmidt clarified:

“You’ve been watching too many movies. Let me be clear: Humans will remain in charge of [AI] for the rest of time.” – Also Eric Schmidt

So, maybe not quite so dire a prediction as it appears at first glance. As DefenseNews points out, “For Schmidt, the benefits AI brings to healthcare and energy outweigh concerns of an apocalyptic robot takeover.” But Schmidt’s comments echo the warnings of Elon Musk and many other prominent AI alarmists. And Schmidt has previously discussed his concerns over China’s AI military aspirations.

Confused yet? We are, too. In the opinion war between AI Alarmists and Realists, the nuances of real-world technology often get muddled.

See the rest of Schmidt’s comments on DefenseNews

3 tips for avoiding dangerous AI

If Elon Musk and Eric Schmidt have gotten you scared about AI, Nicholas Fearn is here to offer some timely advice.

Writing for IT Pro, Fearn offers some pointers on how to use AI responsibly: don’t bother looking for a simple answer, prioritize system security and data safety, and push for responsible industry regulations.

First, Fearn points out that there’s no single answer to AI safety, because AI itself isn’t a single thing. There can never be a “one-size-fits-all” AI solution; the strength of AI lies in targeted solutions to specific problems.

Second, many AI fears revolve around issues of data security. Fearn cites two security researchers who showed how they could hack into and take control of certain cars, even at highway speeds. Data breaches can be disastrous, especially when AI is involved. By taking steps to keep your systems secure and your data reliable, you can reduce your risk of AI safety problems.

Learn more about how to avoid dangerous AI on IT Pro

Weekly AI News & Insights

Stay tuned to the blog for more weekly AI news and insights articles, interest pieces and thought leadership. Got an opinion on the Elon Musk vs Harvard battle, or the opinion war between AI Alarmists and AI Realists? Leave a comment below!

In the meantime, why not learn how Lexalytics can transform your text data into insights and money? Drop by our website for more info.

Categories: Newsletter