Hallucinating AI is a Big Security Problem


This week in AI News & Insights: hallucinating AI is causing security headaches and potential disasters; a new, 101-page report details dozens of potential malicious uses of artificial intelligence; and California legislators are taking action to protect you, your data and your privacy.

Hallucinating AI? It’s not a joke

Artificial intelligence has enabled Apple FaceID, self-driving cars and automatic Facebook friend-tagging in photos. But, as Wired now reports, these complex deep neural networks have a dangerous weakness: small changes to their inputs can cause them to perceive things that aren’t there.

That’s right, top computer scientists are making AI hallucinate.

Take this example from the Wired article, where an MIT grad student tricked Google’s Cloud Vision service into seeing a dog inside a picture of two humans on skis.

Image: Google’s Cloud Vision service is easy to trick. Source: https://www.wired.com/story/ai-has-a-hallucination-problem-thats-proving-tough-to-fix/

And there’s more. Late last year, researchers at Berkeley Artificial Intelligence Research lab demonstrated how to trick neural networks into misclassifying road signs. Just imagine a self-driving car that mistakes a stop sign for a 70-mile-per-hour speed limit marker. Indeed, as AI becomes ubiquitous, the consequences of hallucinating AI become ever more dangerous.

(Also: BAIR makes me think of cyborg bears, another potential AI threat.)

Of course, humans aren’t immune to sensory trickery either. But we have an advantage most computers lack: we can incorporate context into our decision-making. When AI systems are trained on limited data sets and with little regard for consequence, a single hallucinating AI system can cause disaster. So what’s an AI engineer to do?

According to two of the researchers cited by Wired, the AI field “should adopt practices from security research, which… has a more rigorous tradition of testing new defensive techniques.”
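For the technically curious, here’s a minimal sketch of the kind of “small change” attack the researchers describe, using the Fast Gradient Sign Method (FGSM) in PyTorch. Everything here is illustrative: the pretrained model, the image file name and the perturbation size are our own assumptions, not the setup used by the MIT or BAIR teams.

```python
# Illustrative FGSM sketch (not the researchers' actual method).
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

# Hypothetical image file; any RGB photo will do.
# (ImageNet normalization is omitted for brevity; in practice you'd
# normalize before the forward pass.)
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
image = preprocess(Image.open("skiers.jpg").convert("RGB")).unsqueeze(0)
image.requires_grad_(True)

# Newer torchvision versions prefer the `weights=` argument here.
model = models.resnet18(pretrained=True).eval()

# The model's original (presumably correct) prediction.
logits = model(image)
original_class = logits.argmax(dim=1)

# One gradient step: nudge every pixel a tiny amount in the direction
# that makes the original label less likely.
loss = F.cross_entropy(logits, original_class)
loss.backward()
epsilon = 0.02  # small enough that the change is hard to see
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

# The perturbed image often earns a different, confidently wrong label.
new_class = model(adversarial).argmax(dim=1)
print(f"label before: {original_class.item()}  label after: {new_class.item()}")
```

A perturbation of a couple of percent per pixel is often enough to flip the label, which is exactly why defending against these attacks is proving so hard.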

Learn more about AI’s new hallucination problem on Wired.com

New report on malicious uses of AI

This new report on malicious uses of artificial intelligence is the most thorough we’ve ever seen. It comes courtesy of 26 authors from 14 institutions, spanning academia, civil society and industry. The Center for a New American Security (CNAS), one of the report’s co-sponsors, writes, “This report surveys the landscape of potential security threats from malicious uses of artificial intelligence technologies, and proposes ways to better forecast, prevent, and mitigate these threats.”

The full report tops out at 101 pages, but Paul Scharre and Gregory C. Allen have written a fascinating executive summary. Scharre and Allen clarify that they don’t resolve the question of “long-term equilibrium” between attackers and defenders. But they do make four high-level recommendations:

  1. Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.
  2. Researchers and engineers in artificial intelligence should take the dual-use nature of their work seriously, allowing misuse-related considerations to influence research priorities and norms, and proactively reaching out to relevant actors when harmful applications are foreseeable.
  3. Best practices should be identified in research areas with more mature methods for addressing dual-use concerns, such as computer security, and imported where applicable to the case of AI.
  4. Actively seek to expand the range of stakeholders and domain experts involved in discussions of these challenges.

In addition, the report proposes several areas of new research, outlines three broad security domains, and includes some deep strategic analysis.

Browse “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation” at the Center for a New American Security

California takes action to protect your privacy

“Artificial intelligence can transform industries, but California lawmakers are worried about your privacy” – Jazmine Ulloa, LA Times

Well, at least someone’s worrying.

All snark aside, it’s nice to see lawmakers taking a stand. In the age of big data, security and privacy have been neglected. And as AI insinuates itself into our daily lives, regulations need to catch up. (Especially now that hallucinating AI is a real problem.)

Read more about new California privacy laws from the LA Times

Weekly AI News & Insights

Stay tuned to our blog for more weekly AI news and insights articles, interest pieces and thought leadership.

In the meantime, why not learn how Lexalytics can transform your text data into insights and money? Drop by our website for more info.
