“Algorithmic Discrimination”: Is Big Tech Using Our Own Posts Against Us?

It was just a matter of time. Machine learning can predict our health issues simply by assessing the way we behave online. Why do tech companies track us so closely, and what do they want with our health profiles? Mason Marks, M.D., who teaches law at Gonzaga University, warns us to look out for online advertisements that try to tap into our fears and addictions or discriminate against certain internet users with real-life consequences. 

Consider something as innocent as a Facebook status update. Even updates that have nothing to do with physical or mental health contain indicators that artificial intelligence can process to predict conditions like diabetes, anxiety, or depression. Facebook says it performs a public service by mining information that points to suicide risk and alerting law enforcement, so that the person presumed to be at risk is taken to a hospital, possibly by being arrested. And recently, Amazon’s Alexa linked up with Britain’s National Health Service to offer medical advice. Amazon says it won’t build user profiles out of those communications, but critics say the collaboration gives Amazon another way to monitor user activity. Or, as Dr. Marks puts it, to go “mining for emergent medical data.”
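To see how such inferences are possible at all, here is a minimal, hypothetical sketch of the kind of text-classification pipeline these predictions rely on. The posts, labels, and scikit-learn model below are invented for illustration; they are not any company’s actual system, only the general shape of the technique.

```python
# Hypothetical sketch: scoring ordinary status updates for an invented
# "risk" label. Data and model choice are fabricated for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: everyday posts with made-up labels.
posts = [
    "so tired of everything lately",
    "can't sleep again, third night in a row",
    "great day at the lake with the kids",
    "new personal best at the gym this morning",
]
labels = [1, 1, 0, 0]  # 1 = flagged as "at risk", 0 = not flagged

# Word frequencies feed a simple linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# A new post that says nothing explicit about health still gets a score.
new_post = ["skipped dinner again, just want to stay in bed"]
print(model.predict_proba(new_post)[0][1])  # predicted "risk" probability
```

Real systems train on millions of posts and far richer signals, but the mechanism is the same: ordinary language becomes a numerical feature, and the feature becomes a health inference.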

What About Medical Privacy Rules?

Medical professionals must protect our health details. They must comply with the privacy standards of HIPAA, the Health Insurance Portability and Accountability Act.

But who reins in Facebook, Google, or Amazon as they sift through our online habits and outbursts? Tech companies can circumvent anti-discrimination laws, too, as they grab personal details we never consented to hand over.

Monetizing the results of health data mining becomes a health threat in itself. Imagine people with self-destructive habits being targeted with coupons that feed those habits and undermine their safety and chances of recovery.

Senator Amy Klobuchar (D-MN) introduced the Protecting Personal Health Data Act on June 13, 2019. The bill aims to shield quasi-health data collected by diet and fitness apps, social media platforms, and DNA testing services. But it makes exceptions for products that collect personal health data by assessing non-health information such as location data. Because it would exempt some of the most dangerous data mining, Dr. Marks calls the bill a step in the wrong direction.

How Do Algorithms Discriminate?

In algorithmic discrimination, machine-made decisions include certain groups of people and exclude others. A discriminatory ad-delivery algorithm, for example, might hide job postings from people on account of their race, age, sex, sexual identity, or medical condition.
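In code, such a filter can be startlingly simple. The sketch below is hypothetical; the profile tags and exclusion list are invented, but the mechanism, silently dropping users whose inferred traits match an exclusion list, is the heart of the problem.

```python
# Hypothetical sketch of a discriminatory ad-delivery filter. The inferred
# tags and exclusion list are invented for illustration.
from dataclasses import dataclass

@dataclass
class UserProfile:
    user_id: str
    inferred_tags: set  # traits the platform has inferred, never disclosed

def sees_job_ad(profile: UserProfile) -> bool:
    # Exclusion is silent: the user never learns the ad existed.
    excluded_tags = {"age_over_50", "chronic_illness", "substance_use"}
    return not (profile.inferred_tags & excluded_tags)

users = [
    UserProfile("u1", {"runner", "age_over_50"}),
    UserProfile("u2", {"gamer", "new_parent"}),
]
for user in users:
    print(user.user_id, "sees the job ad:", sees_job_ad(user))
```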

Real-life examples already exist. One recently put Facebook and Twitter under investigation when housing ads weren’t shown to people in certain neighborhoods, a form of digital redlining.

The Department of Housing and Urban Development could make matters worse with a proposed rule that would let housing ads stand as long as their inputs don’t target certain user characteristics. That would permit the use of data inputs that appear innocent, Marks explains, but are actually rich with personal identifiers. A study involving Facebook illustrates how random-seeming posts can predict vulnerabilities. Using entirely non-medical posts, researchers linked swearing on social media to smoking, drinking, and drug use. The researchers even found particular words that indicated the user’s drug of choice.
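A hypothetical sketch shows why “innocent” inputs do not make the outcome innocent. The fabricated data below trains a model on nothing but ZIP codes; because ZIP code correlates with a protected group in this toy example, the model reproduces the redlining without ever seeing a protected attribute.

```python
# Hypothetical sketch of proxy discrimination: the model never sees a
# protected attribute, yet a "neutral" input (ZIP code) carries the same
# signal. All data is fabricated for illustration.
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder

# Fabricated records of past ad delivery. In this toy town, residents of
# ZIP 90001 overwhelmingly belong to one protected group.
zip_codes = [["90001"], ["90001"], ["90001"], ["90210"], ["90210"], ["90210"]]
shown_ad = [0, 0, 1, 1, 1, 1]  # historical decisions the model learns from

encoder = OneHotEncoder()
X = encoder.fit_transform(zip_codes)
model = LogisticRegression().fit(X, shown_ad)

# The protected attribute was never an input, but ZIP code reproduces the
# pattern: 90001 residents get a markedly lower chance of seeing the ad.
for zip_code in (["90001"], ["90210"]):
    prob = model.predict_proba(encoder.transform([zip_code]))[0][1]
    print(zip_code[0], "probability of being shown the ad:", round(prob, 2))
```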

We Need Real Solutions

When language cues on Facebook can be processed and tied to vulnerabilities, people can be manipulated psychologically and financially, or denied access to resources and jobs for no apparent reason.

In short, big tech companies, with the help of machine learning, have discovered new pathways for social control, exploitation, and discrimination. It’s time to erect effective legal barriers.