Artificial intelligence (AI) is the SIFK research theme for 2020-2022, encompassing everything from its historical roots, uses and biases in modern AI applications, to future implications of emerging developments.
Jordan Bimm and Giacomo Cetorelli
Imagine waking up floating in space. You are 100 million miles from the Earth and yet still 200 million miles from your destination, the planet Mars. And something is wrong. Your smart watch is blinking and vibrating with urgency. With a tap your health app informs you that the biomedical monitoring devices embedded in your clothing have detected something anomalous. A virtual physician powered by artificial intelligence (AI) appears on a nearby screen and calmly asks, “How did you sleep last night?”
This scenario reads like science fiction, but it is a future quickly becoming reality as the medical management of astronauts begins to transfer from human doctors to AI. With plans for long-duration human missions to distant places like the planet Mars coming into focus, some experts at NASA and private space companies like SpaceX are considering the potential for automated diagnosis, treatment—and even prediction—of in-space ailments. But given recent scholarship showing how AI and machine learning can reproduce medical bias based on race, gender, age, and income level, it is vital to consider the implications for uses in space.
Since the dawn of human spaceflight in the early 1960s, flight surgeons in mission control have remotely monitored and attended to the health of astronauts in real time. In fact, as early as 1949 a new field of medical research and practice called space medicine was established to address the biological problems of spaceflight, which include acceleration and deceleration, temperature extremes, low atmospheric pressure, radiation exposure, and the disorienting effects of weightlessness or zero-G. Early flights during the Space Race and the Space Shuttle era proved humans could withstand these hazards for periods lasting days or weeks.
But the human body evolved in Earth’s atmosphere and gravitational field, and as missions to the International Space Station (ISS) extended astronaut stays to many months and even over a year, new challenges of prolonged exposure to microgravity and radiation have appeared, including weakening bones, blood clots, deteriorating vision, and the looming prospect of radiation-induced cancers. These, and other unforeseen issues, pose problems for plans to get humans to Mars and establish permanent settlements there. If you get sick, who—or what—takes care of you?
The possibilities for medical care in space exist on a gradient, from having multiple human physicians aboard as crewmembers to simply having access to basic medical resources. On the ISS, astronauts are sometimes medical doctors themselves, but most often two crew members are trained in advance so that, between them, they have something like an equivalent competency. The ISS is well-stocked with medical supplies and equipment, and the crew can easily converse with doctors on Earth via video. In an emergency, the crew could choose to evacuate the ISS and be back on Earth in a matter of hours (although this hasn’t happened in the station’s 20-year history).
Missions to deep space change this game significantly. Minimizing what you need to take is seen as a governing principle, so human doctors, supplies, and equipment will be limited. Also, as the distance from Earth grows, the communications lag increases from seconds to minutes, meaning video links to doctors will quickly cease to be real-time interactions. Finally, returning to Earth for treatment of serious injury or illness will be impossible. This is what makes the concept of AI-powered medicine so appealing. It would be less costly to include, it could interact with crew members in real time, and it could recommend treatment or even perform procedures right away in an automated robotic surgical bay in the spacecraft or surface habitat.
NASA has already tested a new system called Astroskin (developed by Quebec-based company Hexoskin in partnership with the Canadian Space Agency) on astronauts aboard the ISS. Astroskin looks like something you might don for your morning workout—a nondescript black jersey and headband. But inside the garment is a suite of miniaturized biomedical sensors that record a wearer’s heart rate, respiration, oxygen levels, blood pressure, skin temperature, and even their activity levels and movements.
One possibility being explored by NASA and other space agencies is to feed this physiological information not to a human flight surgeon back on Earth, but to an AI program onboard the spacecraft pre-trained on large batches of medical data. Not only could an AI program like this detect, diagnose, and suggest a treatment, but it might even predict future health problems in astronauts earlier than its human counterparts could. It might feel strange to take medical advice or receive treatment from an autonomous machine, but 100 million miles from the nearest hospital, you might not have a choice.
At first glance, this all seems like a viable, even exciting, solution. But potential pitfalls appear with some perspective from medical humanities and science and technology studies. Recently, scholars including Ruha Benjamin, Virginia Eubanks, and Meredith Broussard have documented how uses of AI and machine learning in the contexts of terrestrial life like medicine, advertising, human resources, and real estate reinforce preexisting inequities along race, gender, and class lines. Benjamin points out that automated systems “hide, speed, and deepen racial discrimination behind a veneer of technical neutrality.” We often mistakenly believe that machines or computer codes are value-neutral, when in fact, they are stealthy conduits for power and politics.
Benjamin highlights the importance of history when considering future-facing AI, writing, “data used to train automated systems are typically historic and, in the context of health care, this history entails segregated hospital facilities, racist medical curricula, and unequal insurance structures, among other factors.” This insight raises an important question for space: Whose medical data will train an AI used to care for astronauts? This choice will have major knock-on effects. If a space agency decided to train a medical AI on data from the bodies of existing astronauts and spacefarers, the program would receive a heavily skewed set favoring white, male, middle-aged Americans.
Right from the start, back in the 1950s, space medicine experts working for the U.S. military and later NASA assumed future astronauts would be almost exclusively white men. Decades of selecting and studying class after class of white, male, military test-pilots led to space medicine establishing this type of body as a baseline “normal” for space. From 1961 until 1983 every American who flew in space was a healthy white man over 30 years of age.
Even after NASA opened the astronaut corps to women and visible minorities in the late 1970s and began to fly them on space shuttles in the early 1980s, the population of Americans who have visited space does not remotely reflect the wider public. For example, as of writing, only about 10% of the humans who have been to space have been women. Training an AI on existing astronaut medical data would exacerbate a bias towards a white male normal and would reproduce historical inequities as Benjamin describes. If the data used to train the AI is not representative of a diverse group of people, then it will not be able to detect problems with equal efficacy for everyone.
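The statistical mechanism behind this worry can be made concrete with a toy simulation. The sketch below is purely illustrative (the populations, baselines, and thresholds are invented for this example, not drawn from any real astronaut data): a monitor calibrates its “normal” range on one group only, then applies that range to a second, healthy group with a different physiological baseline, producing excess false alarms.

```python
# Hypothetical sketch, not a real flight system: an anomaly detector
# whose "normal" range is calibrated on one population can misfire on
# a population with a different healthy baseline.
import random

random.seed(0)  # make the simulation reproducible

def simulate_resting_hr(baseline, n=1000):
    """Simulated healthy resting heart rates around a group baseline (bpm)."""
    return [random.gauss(baseline, 5) for _ in range(n)]

# Calibrate "normal" bounds (mean +/- 2 standard deviations) on Group A only.
group_a = simulate_resting_hr(60)
mean_a = sum(group_a) / len(group_a)
sd_a = (sum((x - mean_a) ** 2 for x in group_a) / len(group_a)) ** 0.5
lo, hi = mean_a - 2 * sd_a, mean_a + 2 * sd_a

def false_alarm_rate(readings):
    """Fraction of healthy readings flagged as 'anomalous' by Group A's bounds."""
    return sum(1 for x in readings if not lo <= x <= hi) / len(readings)

# Group B is equally healthy but has a higher baseline (70 bpm). Judged
# against Group A's "normal," many of its ordinary readings look anomalous.
group_b = simulate_resting_hr(70)
print(f"False-alarm rate, Group A: {false_alarm_rate(group_a):.1%}")
print(f"False-alarm rate, Group B: {false_alarm_rate(group_b):.1%}")
```

Group A’s false-alarm rate sits near the expected few percent for a two-sigma cutoff, while Group B’s is dramatically higher, despite both groups being healthy by construction. The same asymmetry, inverted, means real problems in the underrepresented group can also go undetected.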
This type of insidious problem could cascade through all phases of a space mission, beginning on Earth. Major companies are already handing over some aspects of personnel recruitment to AI; what if this happens for space too? We can imagine a future in which astronaut candidates must agree to have their social media accounts scraped and scrutinized by an AI trained to predict who is likely to be successful in space and who is likely to remain healthy for a long-duration mission. An AI trained on historical astronaut data would make biased decisions and exclude vast swaths of humanity from these off-world opportunities and futures.
If the AI does determine you have “the right stuff” and you are selected for a mission, how well would a medical AI be calibrated not only to the particularities of your body, but to the unique environment of outer space? Establishing a new “space normal” for the interface between bodies and the space environment (and Mars) will take time, and could lead to an AI missing a problem, making a false diagnosis, or initiating wasteful or harmful treatment. Another possibility comes to mind: what if you disagree with the medical AI? How exactly would you go about getting a second opinion? What if you are uncomfortable with or mistrust the AI’s proposed treatment? Will astronauts hired by NASA or private space companies be free to refuse or reject certain extreme forms of treatment recommended by a computer?
Another concern is privacy and the sharing of medical data, already a contested issue with big tech and digital patient data on Earth. Should an AI physician be programmed to send all medical data, diagnoses, and treatment plans back to mission controllers on Earth? With astronauts wearing garments similar to Astroskin, this type of medical monitoring would also function as a heavy form of workplace surveillance. What degree of privacy should astronauts and future space travelers have? What degree of privacy is necessary for their mental health and a continued sense of well-being? Perhaps astronauts will be assured their interactions with an AI physician will be totally confidential, when in reality mission controllers have secret backdoor access to this data (all in the name of ensuring mission success, no doubt).
A final point is that the physical dimensions of the biomedical monitoring gear—the successors to the Astroskin concept—may also deny access to some. Bodies that don’t fit could be excluded from spaceflight. In 2019, the first-ever all-women spacewalk conducted outside the ISS was delayed from March until October due to problems with the fit of EVA suits designed for men. If the medical monitoring garments required for AI management of astronaut health are not designed with a wide variety of body types in mind, they will pose an unintentional barrier to who is able to go to space.
This future isn’t here yet, but all the pieces for it are in place. This early moment in the development of this set of technologies is the best time to be raising concerns and asking difficult questions, since change is still possible. After all, space is not a utopian transformative place; space is a place where all of our Earthly problems get reproduced or amplified, and this includes problems with medicine and artificial intelligence.
Research for this article was conducted by Giacomo Cetorelli as part of a Research Assistantship awarded by the Stevanovich Institute on the Formation of Knowledge.
For decades science fiction writers have imagined malevolent and medical space-based AI systems. Readers will no doubt think of Stanley Kubrick and Arthur C. Clarke’s HAL 9000 (Heuristically programmed ALgorithmic computer) from 2001: A Space Odyssey (1968), and also possibly Star Trek: Voyager’s (1995) automated holographic chief medical officer known simply as “The Doctor.”
Graham Mackintosh, “AI Applications for Astronaut Health” (October 7, 2020) Presentation to NASA Ames Research Center. https://www.nasa.gov/sites/default/files/atoms/files/space_portal_graham_mackintosh.pdf
Kirsten Ostherr, “Artificial Intelligence and Medical Humanities” in Journal of Medical Humanities (2020): https://link.springer.com/article/10.1007/s10912-020-09636-4
Catherine Zuckerman, “One-of-a-kind Study of Astronaut Twins Hints at Spaceflight’s Health Effects” in National Geographic (April, 2019): https://www.nationalgeographic.com/science/article/study-of-astronaut-twins-hints-at-spaceflight-health-effects
Jasmin Malik Chua, “Astronaut Suits Up in ‘Smart’ Astroskin Jersey on Space Station” in Space.com (February, 2019): https://www.space.com/43228-astroskin-smart-space-jersey-for-astronauts.html
Ruha Benjamin, Race After Technology: Abolitionist Tools for the New Jim Code (Wiley, 2019); Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (Macmillan, 2018); Meredith Broussard, Artificial Unintelligence: How Computers Misunderstand the World (The MIT Press, 2019).
Ruha Benjamin, “Assessing risk, automating racism” in Science, 366 (No. 6464, 2019) pp. 421–422, https://www.science.org/doi/abs/10.1126/science.aaz3873
Jordan Bimm, “Andean Man and the Astronaut: Race and the 1958 Mount Evans Acclimatization Experiment” in Historical Studies in the Natural Sciences, 51 (3, 2021) pp. 285–329.
Jordan Bimm, “Canada’s Space Program has a Diversity Problem” in The Toronto Star (July 9, 2017), https://www.thestar.com/opinion/commentary/2017/07/09/canadas-space-program-has-a-diversity-problem.html
Anthropologist of extreme exploration Valerie Olson uses the term “space normal” to describe the recalibration of “Earth normal” terrestrial medicine. Valerie Olson, Into the Extreme: U.S. Environmental Systems and Politics Beyond Earth (University of Minnesota Press, 2018).
Rob Copeland, “Google’s ‘Project Nightingale’ Gathers Personal Health Data on Millions of Americans” The Wall Street Journal (November, 2019): https://www.wsj.com/articles/google-s-secret-project-nightingale-gathers-personal-health-data-on-millions-of-americans-11573496790
Jacey Fortin and Karen Zraick, “First All-Female Spacewalk Canceled Because NASA Doesn’t Have Two Suits That Fit” in The New York Times (March 2019): https://www.nytimes.com/2019/03/25/science/female-spacewalk-canceled.html