Tay, the Teen Chatbot and Redmore’s Razor


When Microsoft launched an “artificial intelligence” chatbot named Tay, with the personality of a teenage girl, on a number of social media platforms, we were very curious to see how this was going to go. First off, have you LIVED with a teenager? I’m living with one right now, and I’m pretty sure that’s not the age range I would have picked to represent my company. Then we watched, with a mix of two parts horror to one part amusement, as the teen chatbot that was excited about meeting humans and National Puppy Day was trained by Internet trolls to say utterly reprehensible and offensive things. For example, asserting that British comedian Ricky Gervais had been a contemporary acolyte of the Third Reich, learning totalitarianism from Adolf himself:

[Image: Tay bot shares her opinion on British comedian Ricky Gervais.]

As such, I hesitate to continue using the “AI” term, and I’ll get back to why later in this article. For now, we’re just going to call it a “chatbot.”

Tay was the target of an irritation of trolls from 4chan—an anonymous image-sharing forum that Obi-Wan Kenobi would call a wretched hive of scum and villainy—whose entire goal was to be racist, sexist, and offensive.

What makes it amusing, from our point of view, is that this was such an utterly predictable problem. In fact, it’s so predictable that the conspiracy theorist in me thinks this could all be a stunt. What if Microsoft already had the fix for this? I mean, seriously, how hard is it to put in a simple list of “stop words” like, say, “Nazi”? At least make people work for it if they’re going to try to go all Godwin’s Law on your impressionable teen chatbot. (I mean seriously – have you HEARD TEENAGERS TALK? When they don’t think adults are listening? Is anyone really surprised? Maybe this was a better representation of a teen than we’re giving them credit for. /rant)
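Concretely, a first-pass stop-word filter is only a few lines of code. This is a minimal sketch under my own assumptions; the terms, function names, and canned reply are mine, not anything Microsoft actually shipped:

```python
# A minimal sketch of the kind of blocklist filter described above.
# The term list and names here are hypothetical and illustrative only.

BLOCKED_TERMS = {"nazi", "hitler", "holocaust denial"}  # a real list would be much larger

def is_blocked(message: str) -> bool:
    """Return True if the message contains any blocked term."""
    text = message.lower()
    return any(term in text for term in BLOCKED_TERMS)

def safe_reply(message: str, generate_reply) -> str:
    """Refuse to engage with (or learn from) messages that trip the blocklist."""
    if is_blocked(message):
        return "I'd rather talk about something else."
    return generate_reply(message)
```

It wouldn’t stop a determined troll, but it raises the bar above “tweet the obvious keyword and watch the bot repeat it.”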

Chatbots and the like are machine-learning-based systems. Machine learning is entirely based on finding patterns and exploiting them to make predictions and assertions. Sometimes those predictions, when viewed through a human lens, are going to be offensive. To bring up the distant past, Walmart ran into a machine learning/pattern-matching problem years ago. Unfortunately, its recommendation algorithm made some suggestions that were, when viewed by a human, super-racist.
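To see how blindly a pattern-matcher mirrors its data, consider a toy item-to-item recommender. Everything in it is made up for illustration (the sessions, the titles); the point is only the shape of the technique: it has no notion of whether an association is benign or offensive, it just echoes whatever co-occurs.

```python
# A toy co-occurrence recommender: suggest whatever items were viewed together
# in past sessions. The data is hypothetical, not Walmart's.

from collections import defaultdict
from itertools import combinations

sessions = [
    ["Movie A", "Movie B"],
    ["Movie A", "Movie C"],
    ["Movie A", "Movie B"],
]

co_occurrence = defaultdict(lambda: defaultdict(int))
for session in sessions:
    for left, right in combinations(set(session), 2):
        co_occurrence[left][right] += 1
        co_occurrence[right][left] += 1

def recommend(item: str, k: int = 3):
    """Return the k items most often viewed alongside `item`."""
    neighbors = co_occurrence[item]
    return sorted(neighbors, key=neighbors.get, reverse=True)[:k]

print(recommend("Movie A"))  # reflects the data, for better or worse
```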

The Walmart screw-up was deemed “human error,” and I think that’s exactly the wrong way to look at it. Machine learning systems have to be booted up somehow; when they start from zero and learn on the fly, they face what’s called the “cold-start problem.” Cold-started systems are particularly vulnerable to early abuse, where a flood of malicious content trains the model to accept those inputs as valid.
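Here’s a deliberately naive sketch of that vulnerability: a bot that starts with no prior data and imitates whatever it’s told, weighted by volume. It’s a caricature, not how Tay was actually built, but it shows why a coordinated flood of garbage ends up dominating a cold-started system.

```python
# A toy illustration of the cold-start vulnerability: learn replies purely
# from user input, starting from nothing. Hypothetical, not Tay's real design.

import random
from collections import Counter

class ColdStartBot:
    def __init__(self):
        # No curated corpus, no prior "upbringing": the model starts empty.
        self.learned_phrases = Counter()

    def learn(self, user_message: str) -> None:
        # Naively treat every input as a valid example to imitate.
        self.learned_phrases[user_message] += 1

    def reply(self) -> str:
        if not self.learned_phrases:
            return "Hello, humans! I love National Puppy Day!"
        # Parrot back phrases in proportion to how often they were seen,
        # so whoever shouts loudest dominates the bot's behavior.
        phrases, counts = zip(*self.learned_phrases.items())
        return random.choices(phrases, weights=counts, k=1)[0]

bot = ColdStartBot()
for _ in range(3):
    bot.learn("puppies are great")
for _ in range(300):
    bot.learn("<something reprehensible>")
print(bot.reply())  # overwhelmingly likely to echo the coordinated flood
```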

This is where I apologize to teenagers, particularly my son. Teens aren’t “cold-started” at 15. They have a decade and a half of (hopefully) good parenting and peer groups behind them. That isn’t always the case, but I know that my son understands that the Holocaust happened, and that using the “n-word” is going to turn out badly (and why). The errors here aren’t “human error” – they are “lack of human” error. And this is why I am reluctant to assign such a lofty phrase as “AI” to a chatbot that doesn’t carry the context needed to prevent even the most basic of attacks.

So, in another sense, what happened with Tay is “human error” too: the humans responsible for programming her should have anticipated (even without deliberate targeting) that Tay would encounter that dark side of human nature. Auto-moderation is a thing already.

There’s no reason it should be this easy for a machine learning system to hurt humans, be it through racist aspersions or laser-beam eyes. Just take what we’ve done with our own natural-language machine learning systems: we’ve processed billions of words of content to understand semantics, syntax, and context. Detecting hate speech and profanity is a relatively easy problem, and I’m surprised that some basics weren’t built in from the start. This is not Microsoft’s first rodeo, either. (So, apologies if I’m minimizing some of the thought that went into this.) However, Tay has been brought back up and shut back down at least once since the initial launch, so these vulnerabilities persist.
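For a sense of what “some basics built in from the start” might look like, here’s a toy abuse classifier used as a gate on both replies and training data. The library choice (scikit-learn) and the four-example training set are my own assumptions for illustration; a real system would be trained on a large labeled corpus and combined with human review.

```python
# A sketch of automated screening: a small supervised classifier that flags
# abusive text before the bot responds to it or learns from it.
# Toy data and toy scale; illustrative only.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny, obviously insufficient training set, just to show the shape of the task.
texts = [
    "I love National Puppy Day",
    "humans are super cool",
    "<a racist slur aimed at some group>",
    "<a Holocaust-denial talking point>",
]
labels = ["ok", "ok", "abusive", "abusive"]

classifier = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
classifier.fit(texts, labels)

# Gate both replies and training data on the classifier's verdict.
incoming = "puppies are super cool"
if classifier.predict([incoming])[0] == "abusive":
    print("quarantine for human review")
else:
    print("safe to respond to and learn from")
```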

Volumes of amazing science fiction have been written about artificial intelligences, usually involving the AI making a moral judgment about humanity. Yet real-world “AI,” from the models that Walmart used to promote videos on its website in 2006 to Tay in 2016, doesn’t yet make moral judgments. Tay was programmed to learn about humanity through communication and then to emulate what she learned. In that respect, she worked perfectly.

I believe we need to give machine-learning systems a bit of a break here. The basic nature of the technology means that there will be patterns unearthed that we can’t perfectly predict. It is always interesting to see “sinister motive” assigned to these problems – like the whole controversy around Siri and reproductive health. I’m a really big fan of Hanlon’s Razor – basically “don’t assign to malice that which can be adequately explained by stupidity.” I think that there is a corollary here, which I’m going to call Redmore’s Razor: “For a machine learning system, don’t assign to malice that which can be adequately explained by a deficiency in the training set.” Note: There’s a whole ‘nother post about unconscious bias and how it affects the training and behavior of these systems, but let’s put that off for a different time.

In other words, do I think Walmart was expressing latent racism? No. Do I think that Apple is misogynist? No. Do I think that Microsoft is a bunch of haters? Nope. I think that each of these cases is easily addressed by providing more information and context.

Each of these cases is about understanding more of the real world, with all of its mess and complexity.

That’s where AI lives – knowing more, not less.

Check out our web demo to see how text analytics can save your chatbot
