Ethical Considerations of Healthcare Related Artificial Intelligence
- S. Paige Carey
- Jul 9
- 5 min read

Powerful Potential
Artificial Intelligence (AI) is being rapidly adopted across many sectors of healthcare. AI algorithms targeting diagnostic accuracy, operational workflows, treatment strategies, and patient monitoring have been positively received. Consider, for example, the value of AI’s analysis of a bacterium’s genome to determine antibiotic resistance and sensitivity. This breakthrough reduces broad-spectrum antibiotic use and can potentially decrease mortality by enabling rapid identification and administration of the most appropriate antibiotic(s).
The beneficial opportunities for AI are not limited to diagnostics. Consider also the efficiency and throughput gains from AI’s ability to analyze tremendous amounts of disparate data to safely identify patients ready for discharge, or the reduced documentation burden from ambient listening AI that drafts a note based on a patient visit, which a clinician then reviews and edits as necessary before entering it into the chart.
However, for all its promise in healthcare, the speed of innovation has outpaced the creation of the structures needed to ensure ethical application. Due at least in part to this lag, biases have already been baked into many AI algorithms and AI-driven products. Bias can be introduced in many ways, but two of the largest factors are algorithmic bias and data bias.
Algorithmic Bias
Algorithmic bias results from the methodology and intended purpose of a given tool. It is important to recognize that some commercially created AI products impact healthcare but were never intended to help people, increase organizational efficiency, or improve patient outcomes.
Consider the AI claims algorithm used by U.S. health insurers UnitedHealthcare and Humana, which has resulted in ongoing (as of May 2025) lawsuits over alleged inappropriate post-acute care denials for their Medicare Advantage customers. While algorithmic bias is usually unintentional, the plaintiffs in these lawsuits allege that the algorithm’s inappropriate conclusions were purposefully engineered to benefit the companies using it.
Whether intentional or unintentional, algorithmic bias is further obscured by the fact that reports demonstrating how an algorithm arrived at a given conclusion--in this case, denial of an insurance claim--are often not made available to the consumer because many AI algorithms are proprietary. Even when customers do receive a report detailing the rationale for their claim denial, most people are unlikely to be able to interpret it and articulate a successful appeal.
It is important to point out that the Centers for Medicare & Medicaid Services (CMS) does require that Medicare Advantage insurers are “…making medical necessity determinations based on the circumstances of the specific individual… as opposed to using an algorithm or software that does not account for an individual’s circumstances.” However, it is perhaps even more noteworthy that responsibility for monitoring compliance with this requirement is placed on the Medicare Advantage organization itself. In other words, the insurers are responsible for policing their own ethical use of algorithmic software.
Data Bias
AI algorithms are trained using large data sets. Bias, often hidden in that training data, presents a significant threat to the accuracy and trustworthiness of AI. Without very carefully curated training data--and sometimes despite this effort--AI can mirror, reinforce, and perpetuate historical inequalities.
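To make that mechanism concrete, here is a minimal sketch (entirely synthetic data and a hypothetical two-feature model, not any real clinical system) of how underrepresentation in training data can translate directly into a subgroup accuracy gap:

```python
# Minimal sketch: a model trained on data dominated by one subgroup (95% A,
# 5% B) learns the majority group's decision boundary. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Each subgroup's disease signal sits at a different point in feature
    # space, so one linear boundary cannot fit both groups equally well.
    X = rng.normal(shift, 1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Group A dominates the training set; group B is underrepresented.
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on balanced held-out samples: accuracy is high for group A and
# near chance for group B.
for name, shift in [("A", 0.0), ("B", 1.5)]:
    Xt, yt = make_group(2_000, shift)
    print(f"group {name} accuracy: {accuracy_score(yt, model.predict(Xt)):.2f}")
```

Because the underrepresented group contributes so little to the training objective, the model has no incentive to fit it, and the resulting gap stays invisible unless performance is reported per subgroup.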
There are many examples of training data producing biased AI algorithms. Outside of healthcare, clear examples include hiring algorithms that prefer men for high-paying jobs and AI-driven facial recognition software that is more likely to associate people of color with crime and identifies them less accurately. However inappropriate or unfair those outcomes are, AI bias in healthcare can go further and cause direct harm. Consider the following examples:
It is widely reported that AI can identify melanoma lesions with astonishing accuracy. However, with only 5-10% of the training images coming from people of color, that accuracy is largely enjoyed by white patients: the algorithm is reported to be only half as accurate on people of color.
An AI algorithm claimed to accurately predict a patient’s 5-year heart attack risk was trained primarily on data from men. Women are already woefully misdiagnosed when it comes to heart attacks because their clinical presentations differ, and an algorithm trained on male data risks compounding that gap. Even animal research--data sets from which are used in some AI algorithms--predominantly uses male subjects.
Some AI algorithms used to evaluate genetic data and predict predisposition to certain diseases were trained primarily on data from individuals of European ancestry, making their predictions less reliable for patients of other ancestries.
An insurance-related AI algorithm was trained on historical data in which healthcare spending served as a proxy for health need. Because less money had historically been spent caring for Black patients than for equally sick white patients, the algorithm concluded that the white patients must be sicker and prioritized them for care management programs over Black patients.
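The cost-as-proxy failure in this last example is easy to reproduce. The sketch below uses invented numbers (a hypothetical per-condition spending gap and a simple top-10% enrollment rule) purely to illustrate the mechanism:

```python
# Minimal sketch of cost-as-proxy bias: ranking patients by historical
# spending under-enrolls a group whose care was historically under-funded.
# All numbers here are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

need = rng.poisson(2.0, size=n)            # true health need (e.g., chronic conditions)
group = rng.choice(["A", "B"], size=n)     # both groups have equal average need

# Historical spending per unit of need is lower for group B.
rate = np.where(group == "A", 1_000, 600)  # hypothetical dollars per condition
cost = need * rate * rng.lognormal(0.0, 0.2, size=n)

# "Algorithm": enroll the top 10% of patients by cost, treating cost as need.
enrolled = cost >= np.quantile(cost, 0.90)

for g in ("A", "B"):
    mask = group == g
    print(f"group {g}: mean need {need[mask].mean():.2f}, "
          f"enrollment rate {enrolled[mask].mean():.1%}")
# Despite equal average need, group B is enrolled far less often.
```

Analyses of the real-world case reported that predicting health directly, rather than cost, substantially reduced the disparity.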
Combating AI Bias
Regulatory frameworks and ethical guidelines lag behind the rapidly evolving AI technoscape. Still, a number of things can be done to increase the fairness of AI algorithms:
Ensure AI algorithms are comprehensible to clinicians and can disclose how they arrived at a given conclusion
Increase the capacity of algorithms to disclose their level of confidence in a given result (a minimal sketch of this idea follows this list)
Make algorithmic bias detection and mitigation programs a requirement for healthcare-related AI
Establish appropriately representative stakeholder groups for AI development and decision-making, including ethicists, clinicians, data scientists, security experts, and patients
Encourage laws that require increased transparency and accountability of healthcare and healthcare-adjacent AI, such as algorithms used in health insurance and commercial health product research (e.g., personal fitness devices, mobile apps)
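As one illustration of the confidence-disclosure item above, the sketch below (hypothetical model, data, and threshold, not a production pattern) shows a prediction pipeline that surfaces a probability with every result and routes low-confidence cases to a clinician rather than deciding automatically:

```python
# Minimal sketch of confidence disclosure: report a probability with every
# prediction and refer low-confidence cases for human review.
# The model, data, and threshold are all hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1_000, random_state=0)
model = LogisticRegression().fit(X[:800], y[:800])

CONFIDENCE_FLOOR = 0.80  # hypothetical policy threshold

proba = model.predict_proba(X[800:])   # class probabilities, not bare labels
confidence = proba.max(axis=1)
labels = proba.argmax(axis=1)

for label, conf in list(zip(labels, confidence))[:5]:
    action = "auto-report" if conf >= CONFIDENCE_FLOOR else "refer to clinician"
    print(f"prediction={label}  confidence={conf:.2f}  -> {action}")
```

The same pattern extends naturally to the bias-detection item: reporting metrics such as accuracy per subgroup, as in the earlier sketches, is the simplest form of a bias detection and mitigation program.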
Conclusion
AI-augmented healthcare is a transformative technological development with the potential to positively impact billions of people throughout the world. The technology is incredibly powerful and is evolving faster than ethical application standards can keep up. That delay leaves room for powerfully biased and unfair AI that could erode trust in the technology and even cause physical, mental, and financial harm. Regulatory and ethical guidelines for AI in healthcare must be established and enforced.
Assessed and Endorsed by the MedReport Medical Review Board






