Artificial Intelligence < Nukes


Elon Musk was at SXSW recently. He’s arguably the biggest tech celebrity in America right now, possibly the world. Companies under his control create sleek, energy-efficient cars, batteries with never-before-seen energy storage capacity, and rockets with their sights set on Mars. He’s often called “the real-life Tony Stark,” but there is a key difference between him and the fictional tech genius who suits up as Iron Man in the Marvel films: Musk is terrified of artificial super-intelligence (ASI). During his appearance at the festival, Musk warned the crowd that ASI is a greater existential threat to humanity than even nuclear weapons. Bear in mind it’s estimated there are ~15,000 nuclear warheads on Earth as of this writing (see map below).

Photo: Business Insider

The stuff of science fiction?

Photo: SXSW

On one hand, he has a point, as generations of science fiction literature and films have shown, including Avengers: Age of Ultron, in which Stark, played by Robert Downey Jr., creates a murderous ASI voiced by James Spader. On the other hand, it is a reckless and silly statement: the threats an ASI poses are still purely hypothetical, while the threat of nuclear weapons is very real, especially today. Simply put, the fact that every government in possession of these weapons even entertains the possibility of using them makes them an existential threat to all life on this planet. They are unlike anything else, because in an instant, on a whim, the world could change forever.

Humans are the problem

Google Home, Amazon Alexa, and other smart home devices are examples of “narrow” AI. Once you travel off script, they break.

That horrible reality aside, let’s look at what Musk is talking about here. Later in the discussion, he talks about “narrow AI,” the type of AI we are most familiar with. These AIs are limited in scope and in no danger of turning into Skynet, HAL 9000, or whatever sci-fi creation personifies this threat for you. No, what keeps Musk up at night is the potential for an unfettered super-intelligence whose scope is, by design, unlimited. Of course, as with all scientific advances, the problem isn’t so much the technology itself as the fallible humans who create it.

The boy who cried wolf

Whether it’s nukes, ASI, or some as-yet-unimagined technology waiting to be invented, each is just the product of science and imagination applied to a problem. How that technology is ultimately used, however, is where a solution can become a problem in its own right. So his call for transparency and oversight is not a bad one at all. We should learn from the mistakes of our past and not be afraid to take steps that ensure everyone acts ethically and that consequences are fully considered in a process where everyone has a voice.

The risk Musk takes with his increasingly dire warnings about ASI is that his statements will have the opposite of their intended effect. Saying things like “ASI is more dangerous than nukes” could cause people, especially those with neither the time nor the inclination to consider the nuance in his remarks, to simply disregard them out of hand. Rather than a sincere warning from a man whose livelihood is defined by cutting-edge tech, it comes off as the inevitable eccentricity of a mad billionaire who reaches for the stars. When paired with a media that depends on provocative headlines for traffic, clicks, and ratings, what should be a serious discussion about the future of tech and what it means for humanity devolves into pointless arguments about whether the potential threat of an as-yet-unrealized advancement is more or less deadly than nuclear annihilation.

An atavistic fear

The other discussion we’re not having is whether Musk’s view that unfettered ASI would eventually lead to humanity’s destruction is just an atavistic fear of something powerful but not fully understood. His concern represents a very human conclusion, but not necessarily the solution an ASI would reach on its own when confronted with the conundrum of humanity. The stories told about ASI running amok are not warnings from the past about this technology, but rather warnings to humanity about the sorts of lessons we would teach an eventual ASI. These threats are reflections of the darkest parts of humanity’s collective soul, not the inevitable outcome of such a creation. The gods in myths of old were eternal beings yet still subject to human foibles like rage and jealousy, and the same is true for the “gods” in our new myths, often represented by ASI.

The glass is half full

The AI known as Number 5, from the 1986 film “Short Circuit”

Yet there are examples of kind and benevolent ASI in fiction as well. The former CBS television series Person of Interest followed a team of former spies and cops who were led by a secretive ASI that valued all human life and tried to save everyone it could, be they victim or perpetrator of a potential crime. The 1980s-era action comedy Short Circuit also featured an ASI, a military automaton that became sentient after a lightning strike, and turned pacifist after it encountered human literature. (Forever known to a generation of kids as “input.”) Some sci-fi storytellers dream of an ASI that ends up understanding the value of life far better than humanity has in the past.

ASI is still the stuff of fiction, but it might not be for very long. Musk calls for a governmental regulatory body of some kind, but even if such a commission could operate in good faith in these hyper-partisan times, there is no will on the part of leaders to create one. So it’s on us to do it. I don’t just mean those of us who ply our trade in the stuff of the future, like Musk and like we at Lexalytics do, but all of us. We must demand transparency and be willing to hold innovators accountable for their pursuits in the name of science. The threats an ASI could pose are not unrealistic, but wouldn’t it be wonderful if, in trying to stave off those threats, we humans grew a little closer to one another?
