It’s round 2 of “Elon Musk vs AI” as Musk hits back at Harvard professor Steven Pinker as they spar over the dangers of narrow and general AI. Meanwhile, a Google exec warns about “movie-inspired death scenarios” from Terminator-style AI. And an article on IT Pro offers a few more tips on how businesses can avoid dangerous AI.
Elon Musk vs AI, Round 2: Musk fires back at Harvard
The opinion war between AI Alarmists and AI Realists rages on. Last week, Harvard professor Steven Pinker went on a Wired podcast to criticize Elon Musk’s attitude towards artificial intelligence (and his cars, too). Speaking to Wired, Pinker pointed to Musk’s own Tesla Motors as proof that Elon isn’t serious about the potential for AI to cause real harm.
“If,” Pinker said, “Elon Musk was really serious about the AI threat he’d stop building those self-driving cars, which are the first kind of advanced AI that we’re going to see.”
A few days later, Musk fired back on Twitter.
Remember, we’ve discussed Elon Musk’s AI alarmist rhetoric before. And our own CEO, Jeff Catlin, is in the AI Realist camp. In fact, Jeff says that AI won’t destroy us, and Tesla is proof. (Fun fact: Apple co-founder Steve Wozniak is an AI realist, too.) But in this case, we can see where Musk is coming from.
Narrow/targeted AI (like self-driving cars and natural language machine learning) is one thing. General AI (like the Cylons in Battlestar Galactica, Terminator’s Skynet, or the prescient 1999 Disney Channel Original Movie Smart House) is very different. So it’s fair for Musk to warn against the dangers of general AI, even as he invests in self-driving cars. And in “Elon Musk vs Harvard, Round 2” – we’ll give this point to Musk.
Now, with all that said, you should do your research and form your own opinions. Start with this Fortune piece. It offers some useful context about narrow AI versus general AI.
Google exec: Terminator-style AI scenarios are coming
Tech predictions don’t get much more unnerving than this. Eric Schmidt, former Google CEO and former executive chairman of Alphabet, says that “movie-inspired death scenarios… are one to two decades away.”
First, let’s take a step back. Schmidt made this prediction at February’s Munich Security Conference. Here’s his full comment in context:
“Everyone immediately then wants to talk about all the movie-inspired death scenarios, and I can confidently predict to you that they are one to two decades away. So let’s worry about them, but let’s worry about them in a while.” – Eric Schmidt
But in the same interview, Schmidt clarified:
“You’ve been watching too many movies. Let me be clear: Humans will remain in charge of [AI] for the rest of time.” – Also Eric Schmidt
So, perhaps not quite as dire a prediction as it appears at first glance. As DefenseNews points out, “For Schmidt, the benefits AI brings to healthcare and energy outweigh concerns of an apocalyptic robot takeover.” Still, Schmidt’s comments echo the warnings of Elon Musk and many other prominent AI alarmists. And Schmidt has previously discussed his concerns over China’s AI military aspirations.
Confused yet? We are, too. In the opinion war between AI Alarmists and Realists, the nuances of real-world technology often get muddled.
3 tips for avoiding dangerous AI
If Elon Musk and Eric Schmidt have gotten you scared about AI, Nicholas Fearn is here to offer some timely advice.
Writing for IT Pro, Fearn offers some pointers on how to use AI responsibly: don’t bother looking for a simple answer, prioritize system security and data safety, and push for responsible industry regulations.
For one, Fearn points out that there can’t be one simple answer to AI safety, because AI itself is not a simple matter. There can never be a “one-size-fits-all” AI solution, because the strength of AI lies in targeted solutions to specific problems.
Moreover, many AI fears revolve around issues of data security. For example, Fearn cites two security researchers who showed how they could hack into and take control of certain cars, even at highway speeds. Data breaches can be disastrous, especially when AI is involved, so taking steps to keep your data secure and reliable goes a long way toward reducing your risk.
Got an opinion on the Elon Musk vs Harvard battle, or the opinion war between AI Alarmists and AI Realists? Leave a comment or tweet us @Lexalytics!