There’s a certain magic to swiping, typing, and tapping to monitor or even improve one’s health. The internet, broadband, and mobile technology have spurred inventions that have radically transformed our lives, including our health. Even people who don’t consider themselves to be tech-savvy are counting their steps, tracking their heart rates, and monitoring their blood glucose levels. And some people will even tell you their lives depend on that data.
At Luminary Labs, we see the promise of technology as a force for good. And yet, our experience working at the intersection of health and emerging technologies — such as voice tech, AR/VR, and artificial intelligence — requires us to consider the unintended consequences.
More than 18 months ago, we issued a call for health tech ethicists. Since then, tech ethics headlines have saturated the news — from Facebook’s Cambridge Analytica scandal to cities banning facial recognition technology. A couple of years ago, the health tech ethics world was smaller; it was difficult to find people talking about what could possibly go wrong. Today, the list is long and growing. Researchers are publishing papers, governments are questioning technology that’s not quite ready, and companies are considering their roles in a world where we’re building the plane as we’re flying it. Just this week, Kaiser Permanente’s Nick Dawson asked Twitter users to help his team predict “the potential dystopian futures that might come from telemedicine and virtual care and AI.”
The evolving conversation around health tech ethics is promising, but we’re still just scratching the surface. Healthcare AI startups are raising more money than AI companies in other industries; as technology investment and adoption accelerate, we should be considering tough questions at the intersection of health, tech, and equity.
If you share our passion for connecting these important dots, you’ll want to explore these resources — some are recent articles and papers; some have been sparking conversations for months. Here’s what we’ve been reading:
- Building the case for actionable ethics in digital health research supported by artificial intelligence (Camille Nebeker, John Torous, and Rebecca J. Bartlett Ellis). “As the ‘Wild West’ of digital health research unfolds, it is important to recognize who is involved, and identify how each party can and should take responsibility to advance the ethical practices of this work.”
- Fitbits and other wearables may not accurately track heart rates in people of color (STAT News). “Nearly all of the largest manufacturers of wearable heart rate trackers rely on technology that could be less reliable for consumers who have darker skin.”
- Data from health apps offers opportunities and obstacles to researchers (The Verge). “Researchers are eager to tap into the steadily expanding pool of health information collected from users by products like Fitbit, Clue, and the Apple Watch. But while these datasets could be a scientific treasure trove for scientists, they also pose logistical and ethical challenges that need to be addressed.”
- Racial bias in AI isn’t getting better and neither are researchers’ excuses (Vice). “In 2019, AI developers should know that algorithmic bias not only exists but is a serious problem we must fight against. So why does it continue to persist? And can we actually stop it?” (See also: MIT Media Lab researcher and Algorithmic Justice League founder Joy Buolamwini’s TED Talk on fighting algorithmic bias.)
- Invisible Women (99% Invisible). “The vast majority of medical research, for instance, is based on studies of men… Car crash test dummies are also generally male, based on an average man, which of course means they feature different sizes and proportions than a typical female.”
- ‘Automating Inequality’: Algorithms in public services often fail the most vulnerable (NPR’s conversation with author Virginia Eubanks). “Eubanks knows she could have turned out a pretty portrait of three different automated systems elsewhere in the country that were providing services effectively. But she says she wanted to give a voice to the vulnerable people — families to whom, she said, these systems looked ‘really different than they look from the point of view of the data scientists or administrators who were developing them.’”
- Weapons of Math Destruction: Invisible, ubiquitous algorithms are ruining millions of lives (Boing Boing’s review of Cathy O’Neil’s book). “Discussions about big data’s role in our society tend to focus on algorithms, but the algorithms for handling giant data sets are all well understood and work well. The real issue isn’t algorithms, it’s models. Models are what you get when you feed data to an algorithm and ask it to make predictions. As O’Neil puts it, ‘Models are opinions embedded in mathematics.’”
This year, we’ve published a series of Problem Spotlights on the opioid crisis, upskilling and reskilling, building a thoughtful space economy, and the aging of America. Next up: algorithmic bias in health. What should we know about this problem? Who’s working on a solution? Make us smarter: Send your tips to email@example.com.