Health-Related Biometrics Open Up Privacy Risks
Detecting clinical depression from vocal patterns, identifying an individual through their way of walking, determining a caller’s emotion during a customer service session—these are examples of biometric analysis in use today. Both the science of biometric analysis and the exploitation of such analysis have progressed far beyond fingerprints and facial recognition.
The Mozilla Foundation describes itself as “the non-profit, movement-building, and philanthropy arm of Mozilla,” which is the technology company behind the Firefox web browser. The foundation, which has been zealously pursuing privacy on the Internet for years, recently released a report titled From Skin to Screen: Bodily Integrity in the Digital Age, written by Júlia Keserű, a Senior Tech Policy Fellow there. I interviewed Keserű to discuss the implications of her research and her plans for activism concerning biometrics and privacy.
Keserű asserts that biometrics require a new set of laws and regulations. Even the European Union, with its General Data Protection Regulation (GDPR) and its recent AI Act regulating artificial intelligence, has inadequate tools to deal with the extensive human exploitation that is possible now with biometrics.
The combination of cheap, powerful sensors with AI that extracts hidden patterns has created exciting new ways to diagnose complex medical conditions. But this article will examine the risks of biometrics as well as, in contrast, their promising message for human autonomy.
Heading Toward Ubiquitous Technical Surveillance
Biometrics need to be taken seriously now because they are becoming widespread—and they work.
Keserű told me, for instance, that emotion recognition was unreliable until recently because the underlying theories were flawed: researchers would assume that a smile always indicates happiness. As if to illustrate the poverty of that assumption, I responded to her statement with a rueful smile, recognizing how commonly researchers oversimplify.
When emotion recognition moved to more sophisticated techniques, which Keserű calls “multi-modal emotion recognition” because they combine cues such as facial expression, vocal tone, and language, it became much more accurate.
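To make the idea concrete, here is a minimal sketch (my own illustration, not taken from Keserű’s report) of the late-fusion approach that multi-modal systems commonly use: each modality produces its own probability distribution over emotions, and the system combines them. All scores below are invented for illustration.

```python
# Toy late-fusion multi-modal emotion recognition: each modality (face,
# voice) yields its own probability distribution over emotions, and the
# fused estimate is their average. Real systems use trained models per
# modality; these numbers are made up to show how a second cue can
# overturn a single-cue reading.

EMOTIONS = ["happy", "sad", "angry", "neutral"]

def fuse(modality_scores):
    """Average several per-modality probability distributions."""
    fused = {e: 0.0 for e in EMOTIONS}
    for scores in modality_scores:
        for e in EMOTIONS:
            fused[e] += scores.get(e, 0.0) / len(modality_scores)
    return fused

# A smile read in isolation suggests happiness...
face = {"happy": 0.5, "sad": 0.1, "angry": 0.1, "neutral": 0.3}
# ...but tense vocal prosody points elsewhere.
voice = {"happy": 0.1, "sad": 0.2, "angry": 0.6, "neutral": 0.1}

print(max(face, key=face.get))    # happy: the single-cue verdict
fused = fuse([face, voice])
print(max(fused, key=fused.get))  # angry: the multi-modal verdict
```

The toy example shows the point of the smile anecdote: the face alone yields “happy,” but once the voice is weighed in, the fused verdict changes. That is exactly why multi-modal techniques are both more accurate and more invasive.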
Biometrics drive a form of tracking that ranges from basic identification to sentiment analysis. Without further legal protections, we will enter an era of Ubiquitous Technical Surveillance (UTS).
The term UTS was coined by the U.S. Department of Defense to warn of efforts by foreign intelligence agencies to track spies and military personnel. But the term applies just as well to the tracking of ordinary civilians by governments and commercial institutions.
Such tracking leads to several harms, which can be loosely grouped into three categories.
First, biometrics can be used to target people (note the violent military metaphor in “target”) for harms ranging from exploitative marketing to denial of insurance coverage or loans.
Second, institutions can discriminate against people on the basis of perceived traits derived from biometrics, inferences that are often biased because of skewed data sets or simplistic models. I pointed out the potential for bias in data mining in a 1998 article, written long before AI was widely used in these commercial activities.
Finally, AI can identify or re-identify individuals from supposedly anonymized data.
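Re-identification is less exotic than it sounds. The sketch below (my own illustration, with invented data and column names) shows a classic linkage attack of the kind Latanya Sweeney famously demonstrated: a data set with names stripped is joined to a public record, such as a voter roll, on quasi-identifiers like ZIP code, birth date, and sex, and the identities come right back.

```python
# A minimal linkage attack: "anonymized" health records still carry
# quasi-identifiers, which a public record shares. Joining on those
# columns re-attaches names to diagnoses. Data and column names are
# invented for illustration.
import pandas as pd

# De-identified health records: names stripped, quasi-identifiers kept.
health = pd.DataFrame({
    "zip":       ["02138", "02139"],
    "birthdate": ["1970-07-31", "1985-01-02"],
    "sex":       ["F", "M"],
    "diagnosis": ["depression", "hypertension"],
})

# A public record (e.g., a voter roll) with the same quasi-identifiers.
voters = pd.DataFrame({
    "name":      ["Alice Smith", "Bob Jones"],
    "zip":       ["02138", "02139"],
    "birthdate": ["1970-07-31", "1985-01-02"],
    "sex":       ["F", "M"],
})

# The join restores identities, even though the health data had no names.
reidentified = health.merge(voters, on=["zip", "birthdate", "sex"])
print(reidentified[["name", "diagnosis"]])
```

No AI is even needed for this simple case; machine learning only widens the attack surface by matching on subtler signals such as gait or typing rhythm.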
Some people are so powerful that, no matter how much damaging information is released about them, they can do anything they want. But most of us feel the impact of decisions made by other people: our employer, our landlord, our insurance company, our bank, our probation officer—even an ex-spouse or other family member. We are at risk of being tracked through our biometrics, and then relegated to a disempowered class of people who are accorded diminished rights.
Toward a Privacy Framework
Chapter 1 of Keserű’s report lays out uses of biometric techniques in mobile health and other contexts, listing industry trends (which include potential beneficial uses of biometrics), possible harms, and legal protections.
In Chapter 2, Keserű summarizes surveys she conducted for the Mozilla Foundation about public perceptions of biometrics. The studies found that many people are already providing biometric information for mobile health apps and other uses, but are worried about the potential for their information to be shared without their permission and used against them. Not surprisingly, people are more comfortable seeing their data used for scientific research than for marketing or law enforcement.
Chapter 3 is the central contribution of Keserű’s report, proposing a new framework for protecting biometrics that builds on the legal concept of bodily integrity articulated in many countries.
While bodily integrity is meant to protect people against physical harms such as rape, Keserű recommends extending it to data about the body, coining the term “databody integrity” for this new protection.
She follows up this conceptual innovation with a chapter of specific recommendations for policy makers, technologists, and other actors. Many of these recommendations echo common security principles, such as checking for flaws that leave technology open to cyberattacks. When I questioned Keserű about the need to cite such basic principles, she replied that the low level of current security practice requires a reiteration of such fundamentals. With grim realism, she said, “Many activists are aiming too high” and calling for advanced practices when bedrock activities are not yet in place.
The Problem of Consent
The concept of informed consent is certainly an improvement over the Wild West that preceded it in health care, when institutions exploited individuals, as famously documented in Rebecca Skloot’s book The Immortal Life of Henrietta Lacks. But observers admit that informed consent still doesn’t adequately protect patients.
I have meticulously read multipage consent documents concerning my health data, and have refused to participate in research because of unacceptable terms in those documents. But most people never attempt to read them, or fail to understand the implications of their clauses.
One trend is toward an “open consent model” that would allow widespread sharing of patient data for research. Under this model, people are encouraged to share their genomic data after completing a difficult questionnaire about impacts and privacy. I can’t imagine anybody taking the time to study the material for this questionnaire and actually following through.
Fundamentally, I don’t believe that individuals can be taught to protect themselves against invasive data processing, any more than school children can be taught to protect themselves against pervasive gun violence. The environment itself must be made safe.
Interventions Into Policy
Keserű is not a disinterested academic; she intends to fight for the principles she espouses. After finishing her fellowship with the Mozilla Foundation, she told me, she intends to pressure governments, working with lawmakers in Washington, DC and Brussels.
She offers guidelines for advocacy. For instance, while one should advocate measures to reduce bias and improve accuracy, one can also question the use of biometric data in the first place. She also prefers to talk about “harms” rather than “ethics.” And as with all organizing drives, telling people’s real-life stories is crucial.
I think there’s an even bigger lesson here.
Biometrics alert us, in ways we never could before, to the amazing variety of human life. Those who use biometrics to categorize us, reward or penalize us, and design marketing campaigns for us are trivializing our existence as sentient actors. Instead, let’s learn from the advances in biometrics to honor our sanctity as individuals.