AI in Healthcare: Data Privacy and Ethics Concerns

According to a new study from the University of California, Berkeley, advances in artificial intelligence have rendered the privacy standards set by the Health Insurance Portability and Accountability Act of 1996 (HIPAA) obsolete. In fact, stripping healthcare data of identifying information no longer guarantees anonymity: AI can re-identify individuals by cross-referencing that data with other datasets. Current laws simply aren’t sufficient to protect an individual’s health data. This is a problem in part because the same data is incredibly valuable to companies building AI systems. And as AI in healthcare becomes more and more commonplace, data privacy experts are raising big red flags about the ethical implications.

Consumer health data goes way beyond demographics

Healthcare providers, insurance agencies, biopharma companies and other stakeholders don’t just store typical consumer data (demographics, preferences, and the like). A typical patient’s healthcare file also contains data on symptoms, diagnoses, treatments, and other health concerns.

AI in healthcare focuses on analyzing consumer health data to improve outcomes by suggesting diagnoses, reading medical device images, accelerating medical research and development, and more.

In this article, we’ll explore a few alarming ways AI solutions in healthcare are using consumer health data. As you read, consider this question:

How can we balance the potential for positive health outcomes with concerns over health data privacy, ethics, and human rights?

The largest social media platform uses AI to store and act on users’ mental health data with no legal safeguards in place

HIPAA’s healthcare privacy regulations don’t even begin to cover tech companies. HIPAA only protects patient health data when it’s held by organizations that provide healthcare services, such as insurance companies and hospitals.

The upshot of this was demonstrated in late 2017, when Facebook rolled out a “suicide detection algorithm” in an effort to promote suicide awareness and prevention. The system uses AI to gather data from your posts and then predict your mental state and risk of suicide.

Source: TechCrunch

The Facebook suicide algorithm also happens to be outside the jurisdiction of HIPAA.

Of course, this is definitely a positive use case for AI in healthcare. But benevolent intent aside, the fact remains that Facebook is gathering and storing your mental health data. And they’re doing it without your consent.
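For a sense of what’s technically involved, here’s a minimal, hypothetical sketch of how a text-based risk classifier of this general kind could work. To be clear, this is not Facebook’s actual system: the posts, labels and model below are made-up stand-ins, and a production system would be far more sophisticated. The point is simply that your posts become labeled mental health data living inside someone else’s model.

```python
# Hypothetical sketch: flagging posts for human review based on text content.
# Not Facebook's system; toy data and a simple model, for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: past posts labeled by whether they were escalated
# to a human reviewer (1) or not (0). Real systems use far richer signals.
posts = [
    "I can't see a way forward anymore",
    "Feeling really low and alone lately",
    "Had a great weekend hiking with friends",
    "Excited to start my new job on Monday",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Scoring a new post yields a risk estimate: inferred mental health data
# about the author, stored wherever the platform decides to keep it.
new_post = ["Feeling so alone and low today"]
risk = model.predict_proba(new_post)[0][1]
print(f"Estimated review-priority score: {risk:.2f}")
```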

On a related note: Remember when Facebook revealed a data breach affecting 30 million profiles in September 2018? They declined to comment on whether data from the suicide prevention algorithm was compromised.

Also, no one really knows what they’ll do with the data they gather, beyond their stated purpose.

“In principle, you could imagine Facebook gathering step data from the app on your smartphone, then buying health care data from another company and matching the two,” says Anil Aswani, lead author of UC Berkeley’s privacy study.

“Now they would have health care data that’s matched to names, and they could either start selling advertising based on that or they could sell the data to others.” (Emphasis ours)
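The scenario Aswani describes is essentially a record linkage attack, and it takes remarkably little code. Here’s a minimal sketch using entirely made-up data: a “de-identified” health dataset and a named dataset are joined on shared quasi-identifiers, re-attaching names to diagnoses. The column names and records are hypothetical.

```python
# Hypothetical illustration of the re-identification risk described above:
# linking "de-identified" health records to a named dataset by matching on
# quasi-identifiers. All data below is made up.
import pandas as pd

# "De-identified" health data: names stripped, but quasi-identifiers remain.
health = pd.DataFrame({
    "zip": ["94704", "94110", "02139"],
    "birth_date": ["1985-02-14", "1990-07-01", "1978-11-30"],
    "sex": ["F", "M", "F"],
    "diagnosis": ["type 2 diabetes", "depression", "celiac disease"],
})

# A second dataset with names attached, e.g. purchased app or marketing data.
named = pd.DataFrame({
    "name": ["Alice Rivera", "Bob Chen", "Carol Osei"],
    "zip": ["94704", "94110", "02139"],
    "birth_date": ["1985-02-14", "1990-07-01", "1978-11-30"],
    "sex": ["F", "M", "F"],
})

# Joining on the shared quasi-identifiers re-attaches names to diagnoses.
reidentified = named.merge(health, on=["zip", "birth_date", "sex"])
print(reidentified[["name", "diagnosis"]])
```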

Genetics testing companies are selling customer data to pharma and biotech firms

HIPAA also fails to regulate genetics testing companies like Ancestry and 23andMe. These companies, which analyze your DNA to give you information about your health, traits and ancestry, don’t legally count as healthcare services. And so, HIPAA rules don’t apply.

What’s more, 23andMe doesn’t exactly help. Their “Biobanking Consent Document” obfuscates exactly what DNA information the company can keep and how they can use it.

“By choosing to have 23andMe store either your saliva sample or DNA extracted from your saliva, you are consenting to having 23andMe and its contractors access and analyze your stored sample, using the same or more advanced technologies.”

Source: 23andMe

23andMe is similarly vague about how long they can hold onto your genetic data. “Unless we notify you otherwise,” they say, they can store your DNA for up to 10 years.

And don’t forget: These companies can and do legally sell their customers’ genetic data to pharmaceutical and biotechnology firms.

In fact, in 2018 23andMe and pharma giant GlaxoSmithKline signed a four-year, $300 million deal that allows GSK to access and analyze 23andMe’s stockpiles of their customers’ genetic data.

The stated goal of the partnership is “to gather insights and discover novel drug targets driving disease progression and develop therapies for serious unmet medical needs based on those discoveries.”

But did you really give 23andMe informed consent to sell your genetic data? And how else could your DNA information be used?

Insurance companies could use genetics data to bias selection, pricing and more

Genetics testing companies don’t just tell you about your ancestry. They can also provide valuable information about your genetic predisposition to certain health risks. In fact, in 2017 23andMe received FDA authorization to analyze their customers’ genetic information for risk of 10 diseases, including celiac disease, late-onset Alzheimer’s and Parkinson’s.

On the one hand, this is great for customers who can use this information to take preventative action or plan for the future. On the other hand, insurance underwriting experts are raising flags. They warn that, among other uses, insurance companies could use this predictive genetic testing to bias selection processes and charge higher premiums.

What’s more, because certain diseases are statistically more prevalent among certain demographic groups, there is a huge risk that poorly trained insurance algorithms will behave in overtly racist and sexist ways if left to their own devices.
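Detecting that kind of bias is, at minimum, a measurable problem. Here’s a minimal sketch of one basic check a regulator (or the insurer itself) could run on a pricing model’s output: compare quoted premiums across demographic groups and flag large disparities. The data and column names are hypothetical, and real fairness audits go much deeper than a single ratio.

```python
# Hypothetical fairness spot-check on an insurance pricing model's output:
# compare average quoted premiums across demographic groups.
import pandas as pd

quotes = pd.DataFrame({
    "group":   ["A", "A", "A", "B", "B", "B"],  # protected demographic group
    "premium": [220, 240, 230, 310, 295, 305],  # model-quoted monthly premium
})

by_group = quotes.groupby("group")["premium"].mean()
disparity = by_group.max() / by_group.min()

print(by_group)
print(f"Premium disparity ratio between groups: {disparity:.2f}")
# For otherwise-comparable applicants, a ratio well above 1.0 suggests the
# model may be encoding demographic bias and needs closer review.
```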

Thankfully, consumers do already enjoy some protections from these nightmare scenarios. In the US, the Genetic Information Nondiscrimination Act (GINA) protects individuals from genetic discrimination in employment and health insurance.

However, these protections don’t cover every situation. For one, GINA does not apply to employers with fewer than 15 employees. It also doesn’t cover military service members or veterans treated by the Veterans Health Administration.

Worst of all, GINA’s insurance protections apply only to health insurance. Life insurance, long-term care insurance and disability insurance still lack equivalent consumer protection regulations.

To safeguard consumer health data, the US should learn from the EU

Clearly, HIPAA’s consumer health data privacy protections fall far short. What can we do? How can we safeguard individuals, while leaving tech companies free to build innovative solutions for AI in healthcare?

The European Union’s General Data Protection Regulation (GDPR) is a start. GDPR harmonizes data privacy laws across Europe, protects people in the EU and reshapes the way organizations in the region approach data privacy.

In fact, GDPR mandates that organizations must have informed user consent before they collect sensitive information. Perhaps not coincidentally, Facebook announced that their “suicide algorithm” isn’t coming to the EU – though of course they didn’t explicitly mention GDPR.

Note that policies like the GDPR don’t make a distinction between “tech companies” and “organizations that provide health services”. Under the law, any organization that wants to process sensitive personal data, such as health information, must first obtain informed, explicit consent. This comprehensive policy approach both creates effective consumer protections and enforces them with heavy fines and penalties.

But it’s not all good news on the European front

In another case of AI healthcare controversy in Europe, the United Kingdom’s Information Commissioner’s Office (ICO) found that the Royal Free London NHS Foundation Trust breached UK data protection law through its data-sharing agreement with Google’s DeepMind.

As part of that partnership, the Royal Free Trust (RFT) provided records on some 1.6 million patients to DeepMind, which used the data to develop and test Streams, an app intended to improve patient care by alerting clinicians to patients at risk of acute kidney injury.

Unfortunately, the ICO found that the Royal Free Trust failed to adequately inform patients that their data would be used in this testing. “Patients would not have reasonably expected their information to have been used in this way,” said Elizabeth Denham, Information Commissioner, “and the Trust could and should have been far more transparent with patients as to what was happening.”

Among other requests, the ICO asked the Royal Free Trust to commission a third-party audit of their practices. This audit, completed in 2018, concluded that RFT’s program was lawful and complied with data protection laws. But it should be noted that the auditors were engaged by RFT themselves. And lawyers are already questioning why the ICO didn’t give RFT a more serious penalty.

With AI in healthcare, we must consider the human element

Of course, there isn’t a single solution to our concerns over AI in healthcare. But it’s important to raise the question and discuss past and potential outcomes.

Lexalytics CEO Jeff Catlin has seen many AI systems crash and burn over the years. He argues that although AI is an incredible tool, it’s still just a tool. “AI can’t be expected to do it all,” he says. “It can’t take the challenging problems out of our hands for us. It can’t solve our ethical dilemmas or moral conundrums.”

Indeed, the humans behind these tools must take ownership. We are responsible for how we use them. And we must make sure that privacy policies, and ethical standards in general, keep pace.

And we can at least make a few general recommendations.

First, tech companies must be required to educate users on exactly how their data will be used. Then, users must have the option to decline consent.

Second, technology companies themselves need to step up and take the initiative to build appropriate technical safeguards into their AI healthcare solutions. Where they fail, we must hold them accountable.

Third, governments must implement up-to-date legal frameworks surrounding healthcare data. AI and healthcare companies big and small have proven that they can’t be trusted to always self-regulate here.

Once we’ve cleared these ethical and moral hurdles, AI solutions for healthcare will deliver invaluable medical breakthroughs and quality-of-life improvements. But we can’t forget the humans whose data those AIs will learn from.
