Imagine a physician at Stanford University Hospital starting her morning rounds. It is barely eight in the morning, and the hallway smells of overnight coffee and antiseptic. As she goes through her patient list, an alert appears in the electronic health record of a woman in her early forties, a restaurant manager admitted a few days earlier with a serious infection that has since been brought under control. The alert was generated by an algorithm. After scanning the patient's medical records, it produced a prediction: the woman has a very high chance of dying within the next twelve months. The physician looks at the warning, then at the patient's room across the hall, then back at the screen. The patient has no idea this assessment exists.
That is not a hypothetical. Since July 2020, Stanford Hospital has been running the Advance Care Planning model, known outside hospital administration as the death predictor. It has processed more than 11,000 admitted patients since launch, and more than twenty percent of them have been flagged as high risk. The program estimates a patient's likelihood of dying within the next one, three, six, or twelve months (a schematic of this flag-and-review workflow follows the table below). It was developed at the university's Center for Biomedical Informatics Research and deployed by the Stanford Healthcare AI Applied Research Team led by Dr. Steven Lin, who has spoken candidly about its capability: "AI can actually predict pretty accurately when people are gonna die." The stated goal, motivated by both compassion and medical logic, is to start conversations about end-of-life care before they become emergencies. The ethical implications are far less tidy.
| Key Fact | Detail |
|---|---|
| Stanford Hospital Tool | Advance Care Planning (ACP) model; live since July 2020; scans EHR records to predict 1-, 3-, 6-, and 12-month mortality risk |
| Patients Processed (Stanford) | 11,000+ admitted patients; 20%+ flagged with high mortality risk |
| Stanford Research Team Lead | Dr. Steven Lin, Stanford Healthcare AI Applied Research Team |
| Life2vec (Denmark) | AI trained on data from 6 million Danes; 78% accuracy on early mortality prediction (ages 35–65); published in Nature Computational Science |
| Johns Hopkins AI (2025) | AI model predicts sudden cardiac arrest risk; outperforms existing clinical algorithms |
| Early AI Mortality Accuracy | "More than 90% of people predicted to die in the next 3–12 months did so" (per an algorithm study noted in Medium/Berkeley coverage) |
| Key Ethical Principles at Stake | Autonomy, beneficence, non-maleficence, justice, explicability, professional governance (PMC/NIH study, 2023) |
| Documentation Problem | Stanford HAI (2021): hospital AI models "not well documented"; users blind to training-data flaws and calibration drift |
| Equity Concern | Yolonda Wilson (Saint Louis Univ.): racial and socioeconomic bias may be "baked into" training data, shaping who gets palliative care conversations |
| Reference | NIH/PMC, Ethical Considerations in AI Mortality Prediction |
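Stripped of its clinical machinery, the workflow the table describes reduces to a simple loop: score each admitted patient at several horizons, and raise an EHR alert for anyone whose score crosses a threshold. The sketch below shows only that shape; the scoring stub, the threshold, the patient records, and every name in it are invented for illustration and bear no relation to Stanford's actual model.

```python
# Schematic of a multi-horizon mortality-flagging workflow.
# Horizons, scores, and the flag threshold are illustrative only.

HORIZONS = ("1mo", "3mo", "6mo", "12mo")
FLAG_THRESHOLD = 0.60  # hypothetical cutoff for raising an EHR alert

def hypothetical_risk_scores(patient_record: dict) -> dict:
    """Stand-in for the trained model: returns one risk score per horizon."""
    # A real model would consume the full EHR; this stub just scales
    # with the number of recorded diagnoses.
    base = min(len(patient_record.get("diagnoses", [])) * 0.15, 0.9)
    return {h: round(min(base + i * 0.05, 0.99), 2)
            for i, h in enumerate(HORIZONS)}

def review_admissions(patients: list[dict]) -> list[tuple[str, dict]]:
    """Return (patient_id, scores) for anyone crossing the threshold."""
    flagged = []
    for p in patients:
        scores = hypothetical_risk_scores(p)
        if max(scores.values()) >= FLAG_THRESHOLD:
            flagged.append((p["id"], scores))
    return flagged

admissions = [
    {"id": "pt-001", "diagnoses": ["sepsis", "ckd", "chf", "copd", "dm2"]},
    {"id": "pt-002", "diagnoses": ["cellulitis"]},
]
for pid, scores in review_admissions(admissions):
    print(f"ALERT {pid}: {scores}")  # the alert the physician sees on rounds
```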
The accuracy of these systems has improved so quickly that the philosophical questions are no longer theoretical but urgent. Researchers at the Technical University of Denmark built the life2vec project on anonymized data from six million Danish citizens; it predicts early mortality in people aged 35 to 65 with 78% accuracy, which the researchers say beats any competing algorithm they could find. The model treats a human life as a sequence of events, such as births, doctor visits, schooling, marriages, and job changes, and extracts predictive patterns from those sequences, just as large language models extract patterns from text. An earlier algorithm, tested in clinical settings, showed that more than 90% of the people it predicted would die within three to twelve months actually did. And research published by Johns Hopkins in 2025 found that an AI model could substantially outperform existing clinical algorithms at identifying patients at the highest risk of sudden cardiac arrest. In a narrow technical sense, the numbers are now impressive.
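To make the language-model analogy concrete, here is a toy version of the representation step that sequence-of-events models like life2vec rely on. The people, event names, and vocabulary below are invented; the real system trains a transformer on national registry data, not this hand-built encoding.

```python
# Encode each life as an ordered sequence of event tokens, exactly as a
# language model tokenizes words. All names and events here are invented.

lives = {
    "person_a": ["birth", "school_start", "gp_visit", "job_start",
                 "marriage", "hospital_admission", "gp_visit"],
    "person_b": ["birth", "school_start", "job_start", "job_change",
                 "gp_visit"],
}

vocab: dict[str, int] = {}

def token_id(event: str) -> int:
    """Assign each distinct event type a stable integer id."""
    if event not in vocab:
        vocab[event] = len(vocab)
    return vocab[event]

# A transformer trained on millions of such sequences can then learn which
# event patterns tend to precede early death, the same way a language model
# learns which words tend to follow which.
encoded = {name: [token_id(e) for e in events]
           for name, events in lives.items()}

print(vocab)    # {'birth': 0, 'school_start': 1, 'gp_visit': 2, ...}
print(encoded)  # {'person_a': [0, 1, 2, 3, 4, 5, 2], 'person_b': [0, 1, 3, 6, 2]}
```

Once lives are flattened into token sequences like this, the prediction task is structurally the same as next-word prediction, which is why the text-model machinery transfers at all.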
The framework for deciding what to do with these predictions once they are made has not kept pace. According to a 2023 study published in the NIH's PMC digital health literature, six ethical principles come into conflict when these systems reach practice: autonomy, beneficence, non-maleficence, justice, explicability, and professional governance. The study interviewed 18 medical professionals about their experiences with AI mortality tools in emergency departments, and the Swedish research team concluded that the standard principles of medical ethics are insufficient to capture the ethical dimensions of AI in clinical settings. That conclusion seems obvious once you picture the actual situation: a patient lying in a hospital bed while a computer generates a survival probability from years of medical records and demographic data, possibly without the patient's knowledge or consent, and with no obligation on the doctor to disclose the prediction or its basis.
The discomfort sharpens along the equity dimension of these tools. Yolonda Wilson, associate professor of health care ethics at Saint Louis University, has publicly argued that the biases present in training data do not disappear just because the output is a number. A model trained on health records produced by a system that has historically undertreated Black patients, or given lower-income populations fewer diagnostic resources, may issue predictions that perpetuate those injustices, inflating the predicted mortality risk for patients in those groups partly because the system that generated the training data had already failed them. Wilson's framing is straightforward: she wants to know how the algorithm reached its conclusion and what is built into it. That is not bureaucratic pedantry. It is the question of whether the same structural disparities these tools are supposed to help address are being quietly reinforced.
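One concrete form Wilson's question can take is a subgroup calibration audit: compare each group's average predicted risk against its observed outcome rate. The groups, risk scores, and outcomes below are invented for illustration; a real audit would run over a hospital's logged predictions and follow-up data.

```python
from collections import defaultdict

# Hypothetical logged predictions: (group, predicted 12-month risk, died).
records = [
    ("group_a", 0.82, True),  ("group_a", 0.45, False),
    ("group_a", 0.70, True),  ("group_a", 0.30, False),
    ("group_b", 0.85, False), ("group_b", 0.78, True),
    ("group_b", 0.90, False), ("group_b", 0.65, False),
]

totals = defaultdict(lambda: {"pred": 0.0, "died": 0, "n": 0})
for group, risk, died in records:
    t = totals[group]
    t["pred"] += risk
    t["died"] += int(died)
    t["n"] += 1

for group, t in sorted(totals.items()):
    mean_pred = t["pred"] / t["n"]
    obs_rate = t["died"] / t["n"]
    # If one group's predicted risk runs far above its observed mortality
    # while another group's does not, the model is mis-calibrated for that
    # group: the "baked in" bias Wilson describes, made visible.
    print(f"{group}: mean predicted risk {mean_pred:.2f}, "
          f"observed mortality {obs_rate:.2f}")
```

In this toy data, group_b's average predicted risk (0.80) far exceeds its observed mortality (0.25) while group_a's prediction and outcome roughly match; that is the pattern an equity audit exists to catch.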
Stanford HAI raised a similar warning as early as 2021, noting that hospital AI tools were "not well documented" and, in the researchers' words, left clinicians in the dark. Models already running in clinical settings did not disclose enough about their training data, their known limitations, or the circumstances under which their predictions might become unreliable. One known risk in clinical AI is calibration drift: the slow decay of a model's accuracy as the patient population it encounters diverges from the one it was trained on. When a model trained on data from 2015 meets patients whose health profiles reflect conditions that became common in 2022, its predictions quietly lose accuracy, and the model does not flag its own growing uncertainty.
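Calibration drift is detectable, but only if predictions are logged alongside eventual outcomes. Below is a minimal monitoring sketch under that assumption; the window size, alert threshold, and data are invented and are not Stanford's actual pipeline.

```python
# Minimal calibration-drift monitor over a chronological prediction log.
# Window size, threshold, and data are illustrative assumptions.

def brier_score(pairs):
    """Mean squared error between predicted risk and the 0/1 outcome."""
    return sum((p - o) ** 2 for p, o in pairs) / len(pairs)

# (predicted_risk, outcome) pairs logged in chronological order.
log = [(0.80, 1), (0.30, 0), (0.75, 1), (0.20, 0),   # early window
       (0.80, 0), (0.70, 0), (0.25, 1), (0.60, 0)]   # later window

WINDOW = 4
baseline = brier_score(log[:WINDOW])
for start in range(WINDOW, len(log), WINDOW):
    current = brier_score(log[start:start + WINDOW])
    # A rising Brier score means the model's probabilities are drifting
    # away from observed outcomes in the population it now sees.
    if current > baseline * 1.5:
        print(f"window at {start}: Brier {current:.3f} "
              f"vs baseline {baseline:.3f}; flag for recalibration")
```

The point of the sketch is not the threshold but the discipline: without this kind of routine outcome-versus-prediction comparison, the documentation gaps Stanford HAI described guarantee that drift goes unnoticed.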
It is worth considering what these predictions actually do to clinical care. The intended effect is that a mortality alert prompts doctors to open end-of-life discussions with patients who might benefit from palliative care. Research consistently shows that patients want these conversations but rarely have them, and that many doctors struggle to start them without a catalyst. That is a legitimate, defensible good. The concern is what happens around the edges.
A prediction of imminent death may lead a doctor to treat a patient less aggressively. It may shape how a case is discussed during team rounds. If the data ever left the hospital, it could affect insurance decisions; Pernille Tranberg, a Danish data ethics specialist, has noted that insurance companies already use comparable algorithms to sort customers by risk and adjust premiums. The researchers behind life2vec have described their work as a public counterweight to the private algorithmic systems big tech companies are building, systems they say escape public discussion because they are used to sell products. That framing, open research as a countermeasure to private power, may be the most important part of the story and the hardest to resolve quickly.
There is no clean resolution here. The tools are already deployed, and more are coming. Hospitals face pressure to use predictive analytics to allocate scarce resources such as beds, specialist time, and palliative care teams, and the aggregate accuracy of AI-generated risk scores genuinely helps with that. The question is not whether these tools belong in medicine. The question is whether medicine has built, and keeps building, the moral framework needed to use them without eroding patients' autonomy, trust, and basic dignity while the algorithms run.
