Artificial Intelligence and Human Beings


Science fiction is a conflicted genre of storytelling, but in the best possible way. The authors of these amazing stories are often simultaneously curious about the future and terrified of how it could all go wrong. For example, in the summer blockbuster Avengers: Age of Ultron, one character’s scientific curiosity leads to the creation of the titular villain. In fact, many stories* deal with how scientific curiosity brings about the apocalypse. Perhaps this is why Elon Musk has been so cautious about artificial intelligence.

In July, Musk and more than a thousand other luminaries, including Stephen Hawking and Steve Wozniak, signed an open letter calling for a ban on the development and use of artificially intelligent, autonomous weapons. Then, in December of this year, Musk and others created the nonprofit group OpenAI, backing it with one billion dollars to study how artificial intelligence can best benefit humanity, free from the need to turn a profit or please investors.

Some blogs noted that this seems incongruous: studying and developing artificial intelligence right after cautioning the world against its misuse. In these and other such critiques, examples from fiction, from Asimov to James Cameron’s Terminator, inevitably come up.

Here at Lexalytics, we know a thing or two about artificial intelligence, and it’s nothing to be “afraid” of. Pretty much all sci-fi writers would agree with us: the warning they are trying to send is not about science or technology, but about us.

Any technology, from the printing press onward, has the potential to be abused. In some cases that abuse is more dangerous than in others, and we almost never realize how bad it was until it has already happened. This doesn’t mean we should avoid creating; just the opposite. Musk and the others have it right: if you’re afraid of something, learn more about it.

When people think of artificial intelligence, they often think of characters like Ultron or SkyNet or HAL 9000. These conscious but amoral machines are not what real-world AI looks like. Instead, think of C-3PO. The droids of Star Wars are autonomous beings, but beholden to their programming. 3PO could no more pick up a gun, sorry, “blaster,” and start shooting than could your smartphone.

We are surrounded by AI in our technological lives, from the content you see when browsing smartphone apps to the very phone itself. It’s the difference between the tools in your toolbox and a 3D printer that can build structures.

Any scientific advancements we make, including AI, are not to be feared. However, they are to be watched and understood, so that people who try to abuse them can be spotted and terminated…er, stopped. Weird, it’s like the word-processor just typed that itself….

* If you like Joss Whedon’s stories that deal with how scientific curiosity and innovation lead to the apocalypse, the two-season series Dollhouse is for you! – Editor

Categories: Analysis, Natural Language Processing