We’re no stranger to pithy quips about our evil robot overlords. People are quick to point out the risks of a world managed by a technology that is everything from coldly uninterested to actively malign. I personally believe that neither of those cases is likely – I think that strong AI is going to be messy and biased and anything but super. I mean, look at who’s making it.
But while the hypothetical, far-future HAL 9000s have claimed a special place in our nightmares, the real dangers are far more immediate and much closer to home. There are plenty of bad actors here in the flesh, enough that I’m more worried about them than I am about HAL not being able to do that, Dave.
Humans evolved from pack hunters. We see the world as “us and them.” We are very naturally inclined to want to bucket groups into those two categories. And it’s easy to trigger that reflex, particularly when our triggers and levers are understood. Unfortunately, it’s often in someone else’s best interests to keep us separated, afraid, angry.
Big data is watching you
Data mining and analytics company Cambridge Analytica is now a surprise contender in a race with companies like EA, Spirit Airlines, and Equifax for “least popular company of the year.” In case you’ve been actually living under a rock, they apparently used some rather sketchy means to snag Facebook users’ data to create political propaganda that was exquisitely designed to hit the precise political triggers of a particular section of voters in the US of A. (Seriously, if you didn’t know that, close this blog and check out Channel 4 pronto. Spoiler Alert: Get ready for the best episode of Candid Camera ever.)
Now, there’s no such thing as an “apolitical” company. All companies stand for something simply because they reflect the interests and beliefs of the people behind them. Some companies are more political and some are less, depending on the corporate culture. Then there are those companies that provide services for the political process itself: public opinion polling, help with media buys, or managing and advising the candidate. Since President Obama was elected, there’s been growing interest in companies that munge data and provide services like micro-targeting, ad optimization, and social network optimization.
Most of these companies are perfectly benign (insofar as things like “highly-targeted advertising that is perfectly matched to you” are benign). For example, the Obama campaign used Facebook micro-targeting as a community organizing tool. But Cambridge Analytica unfortunately crossed a line that will have lasting repercussions on—and I’m not exaggerating here—the world.
The strategy was simple: to identify and prey on users’ fears and hopes in order to encourage moderate or undecided voters to swing to the right – ushering in the Trump era. Preying on fears is hardly new. I’m pretty sure that the very first election, back in Zog the Caveman’s days, railed against the new, against the changes that Zog pushed.
Harvest season is here
So far, none of this is that big of a deal. However, this is where Cambridge Analytica (in my opinion) crossed the line into unethical behavior. It’s difficult and time-consuming to source clean, stable text or demographic data. And yet Cambridge Analytica managed to harvest the personal data it needed in record time – quickly enough to shape election outcomes.
A Soviet-born data scientist was contracted by Cambridge Analytica to figure it out. According to the Guardian, Dr Aleksandr Kogan advertised for people to take a paid personality quiz within the Facebook ecosystem; in doing so, quiz takers granted access to their own Facebook profiles and to their friends’ profiles. It only took about 320,000 quiz-taking “seeders” to harvest information on an incredible 50,000,000 Facebook users. These 50,000,000 users were not aware their data was being used for commercial or political purposes. (Ok, to be fair, my opinion on this is best summed up by this bit of stand up… I saw it on Reddit, and I can’t find the original piece, but it goes something like: “If you’re surprised Facebook is selling your data, then you are the reason why bags of peanuts display the warning ‘May Contain Nuts’.” This isn’t really Facebook selling data, but it gets across the vibe pretty well.)
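The multiplier at work here is worth pausing on. A quick back-of-the-envelope calculation (using only the two figures reported above – the averages it implies are my own arithmetic, not reported numbers) shows how a modest number of quiz takers fans out across the friend graph:

```python
# Back-of-the-envelope: how ~320,000 quiz takers could expose ~50,000,000 profiles.
# Figures from the reporting above; the per-seeder average is derived, not reported.
seeders = 320_000
profiles_reached = 50_000_000

# Each seeder granted access to their friends' profiles as well as their own,
# so the harvest scales with the average number of unique friends per seeder.
avg_profiles_per_seeder = profiles_reached / seeders

print(f"Each seeder exposed ~{avg_profiles_per_seeder:.0f} profiles on average")
print(f"Amplification factor: ~{profiles_reached / seeders:.0f}x")
```

In other words, each person who took the quiz unknowingly handed over roughly 150 other people’s data – which is why the friend-permission model, not the quiz itself, was the real harvesting engine.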
While personal data can be collected for academic purposes, in the UK it’s illegal to sell it on without consent. These unwitting users’ personal data became the training set and backbone of Cambridge Analytica’s machine learning models.
Messaging or manipulation?
Targeted advertising is nothing new. Nor is micro-targeting, a market segmentation approach in which data and demographics identify and leverage the interests, preferences and beliefs of small, niche groups. It’s been used by brands and political candidates for years. If you don’t have to actually move around physical objects, you can have a separate ad for each person. Completely unique. Turns out you don’t even need to do that: we do a pretty good job of bucketing ourselves, so a few hundred or a few thousand ads will suffice.
All marketing persuades, spins, convinces. The most ethical of marketers (Hi!) are still trying to show their wares in the best light. What is genuinely freaky here is the sheer power of artificial intelligence married with good data – power to manipulate, power to affect a singularly strong democracy. In the words of Cambridge Analytica whistleblower Christopher Wylie, it’s a “psychological warfare tool.” It’s estimated that the cost to Cambridge Analytica was minimal: just $1 million in data sourcing and $14 million for the analysis. These are figures well within the reach of most brands and political candidates. However, before you get too shocked, just ponder the sordid history of tobacco advertising.
Warning: Personal opinion ahead
I think there’s also a certain amount of shock, of people looking for an explanation. “How could this have happened to us?” Well, maybe because enough people in the right states wanted a President Trump. And that’s an important aspect of this – marketing isn’t going to convince you to jump off a cliff if you weren’t already standing kinda close to the edge. Sure, it makes for a great villain story, with a sordid combination of privacy violation and computer creepiness. But, really, back up a sec. Is anyone getting fussy about Amazon’s “hey, come buy this too” actions? Or the fact that Facebook chooses which articles to show you based on what will keep you clicking and engaged the most? These are all very much in the same ballpark.
I believe that Cambridge Analytica crossed a line by collecting data from users that they shouldn’t have. That was their only obviously unethical action. The rest of it was providing excellent service to the people that were paying their bills. If we want a different world, then we need to push for higher standards for privacy, better regulation, and, you know, maybe taking some of the money out of politics.