BY DR. HEAVEN PROVO, MBA, DNP, RN, CCM, PHNA-BC, NE-BC, LNFA, CAHIMS, FCM
Healthcare has long regarded deep learning as a frontier of AI, given its success in a wide variety of applications such as natural language processing and speech recognition. Artificial intelligence (AI) is reshaping healthcare by enhancing diagnostic accuracy, streamlining administrative processes, and enabling personalized care. Many have questioned how it really fits into case management (CM) and utilization review (UR), practice arenas that are instrumental to patient outcomes and healthcare efficiency. Yet the integration of AI into these domains raises complex ethical questions, and as with any intervention, there is no one-size-fits-all approach. The ethical use of AI in healthcare, particularly in CM and UR, emphasizes transparency, fairness, safety, accountability and the indispensable role of human professionals (professional case managers) working alongside AI systems. These tenets align with the core principles of clinical ethics: beneficence (acting for the benefit of the patient), nonmaleficence (doing no harm), autonomy (informed decision making), justice (providing equitable care), and integrity (honesty) (Varkey, 2021). While automation and AI tools can reduce administrative burden, they also introduce ethical risks if decisions are made without adequate clinical context (Ratti, Morrison & Jakab, 2025).
At its core, AI relies on vast amounts of patient data, raising concerns about privacy, consent and data security. Given this consumption of patient health data, it is important to keep HIPAA and privacy concepts at the forefront. Generative AI, for instance, can create synthetic medical records or simulate patient interactions, raising questions about authenticity and accountability. The integration of AI with wearable devices and remote monitoring tools also introduces concerns about surveillance and autonomy (Griot & Walker, 2025). In case management, where sensitive information about a patient’s medical history, social determinants of health and behavioral patterns is analyzed, the risks are particularly high (Dankwa-Mullan, 2024). AI has found support in CM, where predictive and other risk-based models inform care delivery; it has been used to identify high-risk patients, predict hospital readmissions, and coordinate care. Patients must be informed about how their data will be used, who will access it and the potential risks involved. Consent should be dynamic and revisited as AI applications evolve. Moreover, patients should be involved in decision-making processes, ensuring that care remains person-centered. Involving patients in the design and development of transparent systems supports informed consent and fosters trust (Dankwa-Mullan, 2024).
Data drives decisions. As valuable as the data points might be, they should carry full consideration for the lives they are linked to. AI must be developed with an equity lens. This includes engaging diverse thought leaders, including patients from marginalized communities, ensuring a balanced population mix when testing models, and maintaining a strong ethical lens in the overall design and evaluation of AI systems. A key ethical principle in AI deployment is “keeping humans in the loop.” “Humans can provide training data for machine learning applications and directly accomplish tasks that are hard for computers in the pipeline with the help of machine-based approaches. As it relates to data, the efforts of the human can be classified into three categories with a progressive relationship: (1) the work of improving model performance from data processing, (2) the work of improving model performance through interventional model training, and (3) the design of the system independent workflows” (Wu et al., 2022). From a person-centered perspective, AI should augment, not replace, human professionals. In both case management and utilization review, human oversight ensures that AI recommendations are contextualized and ethically sound. AI tools are not a means to replace the team; rather, they complement daily work and supportive tasks. AI can enhance the human element by saving hours of data mining and/or chart review, improving human performance and efficiency in ways that lend themselves to quality evaluation and a multitude of use cases. Case managers and UR teams must be trained to critically evaluate AI recommendations and integrate them with holistic assessments.
Optimized AI models should serve as tools and work in tandem with the individuals and teams delivering person-centered care. If not properly socialized with care teams, AI systems can feel like “black boxes”: seemingly abstract, intangible and difficult to understand in terms of how they work, how the information comes together and, ultimately, how decisions are made. Explainability is essential for ethical AI. Clinical teams can be particularly reluctant to lean into models that feel like the “man behind the curtain.” Without transparency, patients and providers may be unable to comprehend or appropriately challenge these decisions (Ratti, Morrison & Jakab, 2025). Providing a meaningful construct and overview to the teams applying the models to their work can have a significant impact on receptivity and adoption. Companies or organizations may have a “proprietary algorithm,” but it is important to inquire what aspects can be shared to aid understanding. If designed and deployed ethically, AI can identify gaps in care, target interventions and promote equity.
Effective governance is essential for the ethical deployment of AI in healthcare. This includes not only compliance with existing laws but also the development of new regulations tailored to AI’s unique challenges. Ideally, policies should mandate equity impact assessments and require that AI tools be evaluated for their effects on different population groups (Dankwa-Mullan, 2024). To ensure fairness, developers must use diverse and representative datasets and must be accountable for the integrity and performance of their systems. To promote social justice and equity, bias audits should be conducted regularly, and models should be tested across demographic groups. If biased or poorly regulated, AI can reinforce systemic inequities; the technical designers creating the algorithms must be careful not to introduce bias or disproportionate care. Feedback from subject matter experts and end users provides key insights into the designs. Ethical AI also requires mechanisms for redress when harm occurs due to biased decisions (ANA, 2025). If not properly managed, AI systems can lead to unequal treatment and exacerbate existing health disparities; AI has the potential to either bridge or widen them. Regulatory frameworks should address transparency, accountability, bias mitigation and patient rights (Ratti, Morrison & Jakab, 2025). Recent efforts by public health agencies emphasize the need for clinical oversight, transparency in decision-making and protections against discriminatory practices. International collaboration is also important to harmonize standards and ensure consistency across borders (Dankwa-Mullan, 2024). As AI technologies evolve, new ethical challenges will emerge.
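A regular bias audit of the kind described above can start very simply: compare a model’s decision rates across demographic groups and flag large gaps for human investigation. The sketch below is purely illustrative; the field names and the 10-percentage-point threshold are assumptions for the example, not a regulatory standard.

```python
from collections import defaultdict

# Hypothetical bias-audit sketch: compare approval rates across groups and
# flag pairs whose rates differ by more than an assumed threshold.

def approval_rates_by_group(decisions):
    """decisions: list of dicts with 'group' and 'approved' (bool) keys."""
    counts = defaultdict(lambda: [0, 0])     # group -> [approved, total]
    for d in decisions:
        counts[d["group"]][1] += 1
        if d["approved"]:
            counts[d["group"]][0] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

def flag_disparities(rates, threshold=0.10):
    """Return group pairs whose approval rates differ by more than threshold."""
    groups = sorted(rates)
    return [(a, b) for i, a in enumerate(groups) for b in groups[i + 1:]
            if abs(rates[a] - rates[b]) > threshold]

decisions = [
    {"group": "commercial", "approved": True},
    {"group": "commercial", "approved": True},
    {"group": "commercial", "approved": True},
    {"group": "commercial", "approved": False},
    {"group": "medicaid", "approved": True},
    {"group": "medicaid", "approved": False},
    {"group": "medicaid", "approved": False},
    {"group": "medicaid", "approved": False},
]
rates = approval_rates_by_group(decisions)   # commercial 0.75, medicaid 0.25
print(flag_disparities(rates))               # prints [('commercial', 'medicaid')]
```

A flagged pair is a prompt for human review, not an automatic verdict: an audit like this surfaces disparities so clinicians and ethicists can judge whether they reflect bias or legitimate clinical differences.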
Polevikov (2023) cites 12 key aspects of AI in clinical practice:
Ethical AI
Explainable AI
Health Equity and Bias in AI
Sponsorship Bias
Data Privacy
Genomics and Privacy
Insufficient Sample Size and Self-Serving Bias
Bridging the Gap Between Training Datasets and Real-World Scenarios
Open Source and Collaborative Development
Dataset Bias and Synthetic Data
Measurement Bias
Reproducibility in AI Research
In parallel, WHO (2021) identified six core principles for AI in healthcare:
Protect autonomy
Promote human well-being, human safety, and the public interest
Ensure transparency, explainability and intelligibility
Foster responsibility and accountability
Ensure inclusiveness and equity
Promote AI that is responsive and sustainable
As each of these elements is carefully explored, it links to important decisions that can have significant bearing on patient outcomes. Involving patients can play a pivotal role in making these technologies more acceptable and better aligned with patient values. This approach reduces the risk of a mismatch between the capabilities of deployed solutions and the expectations of patients, ensuring that the systems are tailored to clinical practice needs (Griot & Walker, 2025).
As we move outward from our patient-centered model, we broach the supporting elements of care. The healthcare revenue cycle has a strong use case for AI, if properly vetted. Large language models (LLMs), such as ChatGPT, are advanced machine learning (ML) algorithms designed to interpret and generate text. This text can be used to close gaps on the front end (precertification), middle revenue cycle (level of care), and back end (denials modeling and management, including appeal letter generation). When paired with human oversight, the synergy can optimize workflows and maximize efficiency to promote optimal revenue capture. Utilization review involves evaluating the necessity, appropriateness and efficiency of healthcare services as part of the revenue cycle. AI is increasingly used to automate this process, particularly in prior authorization decisions. Biased algorithms may systematically deny care to certain populations based on flawed or incomplete data. For example, if an AI model is trained predominantly on data from commercially insured patients, it may not perform well for Medicaid recipients or underserved communities (Ratti, Morrison & Jakab, 2025). Ethical frameworks should also address the potential for AI to be used primarily as a cost-containment tool rather than a means to improve care quality (Dankwa-Mullan, 2024). Appeals processes should be accessible and timely, allowing patients and providers to challenge decisions. The human element is important to ensure accuracy, regulatory compliance and adherence to payer policy, which algorithms alone could misjudge. Validating output prior to submission can reduce delays and errors. Ethical AI systems must incorporate robust data governance, including encryption, access controls and regular audits. Clear lines of accountability must be established.
Healthcare providers and insurers should retain ultimate responsibility for decisions, even when assisted by AI.
The ethical use of AI in case management and utilization review requires a careful balance between innovation and responsibility. While AI offers powerful tools to enhance efficiency and improve outcomes, it must be deployed in ways that respect patient autonomy, ensure fairness and uphold accountability. Ethical frameworks should require that AI tools used in UR and case management provide interpretable outputs and that their decision-making logic be documented. Over-reliance on AI may lead to decisions that overlook individual preferences, cultural factors or psychosocial needs (Dankwa-Mullan, 2024). Human professionals must remain central to the process, providing the empathy, judgment and ethical oversight that AI cannot replicate. By embedding ethical principles into the design, implementation and governance of AI systems, we can harness their potential while safeguarding the rights and dignity of all patients. Ongoing research is encouraged to continue developing adaptable ethical frameworks that can keep pace with technological change. Interdisciplinary collaboration, among ethicists, clinicians, technologists and patients, will be key to navigating these complexities. Additionally, continuous monitoring and systematic feedback loops should be built into AI systems to ensure they remain aligned with ethical standards. Regulatory bodies should also play a role in certifying and monitoring AI tools used in case management and UR (Ratti, Morrison & Jakab, 2025).
References
ANA Ethics Advisory Board. (2025, May 3). ANA position statement: The ethical use of artificial intelligence in nursing practice. OJIN: The Online Journal of Issues in Nursing, 30(2). https://ojin.nursingworld.org/MainMenuCategories/Ethics/ANA-Position-Statement-The-Ethical-Use-of-AI-in-Nursing-Practice.html
Dankwa-Mullan, I. (2024). Health equity and ethical considerations in using artificial intelligence in public health and medicine. Preventing Chronic Disease, 21, 240245. https://doi.org/10.5888/pcd21.240245
Griot, M., & Walker, G. (2025). A patient-in-the-loop approach to artificial intelligence in medicine. JAMA Network Open, 8(6), e2514460. https://doi.org/10.1001/jamanetworkopen.2025.14460
Polevikov, S. (2023). Advancing AI in healthcare: A comprehensive review of best practices. Clinica Chimica Acta, 548, 117519. https://doi.org/10.1016/j.cca.2023.117519
Ratti, E., Morrison, M., & Jakab, I. (2025). Ethical and social considerations of applying artificial intelligence in healthcare—a two-pronged scoping review. BMC Medical Ethics, 26, 68. https://doi.org/10.1186/s12910-025-01198-1
Varkey, B. (2021). Principles of clinical ethics and their application to practice. Medical Principles and Practice, 30(1), 17–28. https://doi.org/10.1159/000509119
World Health Organization. (2021). WHO issues first global report on artificial intelligence (AI) in health and six guiding principles for its design and use. https://www.who.int/news/item/28-06-2021
Wu, X., Xiao, L., Sun, Z., Zhang, J., Ma, T., & He, L. (2022). A survey of human-in-the-loop for machine learning. Future Generation Computer Systems, 135, 364–381. https://doi.org/10.1016/j.future.2022.05.014
Dr. Heaven Provo, MBA, DNP, RN, CCM, PHNA-BC, NE-BC, LNFA, CAHIMS, FCM, has over 20 years in the healthcare arena. She holds a Doctorate in Nursing Practice and a dual master’s in business & nursing, accompanied by multiple board certifications. Heaven has worked across the care continuum to support innovation. She currently serves as a subject matter expert with LongTail. Her objectives focus on improving patient outcomes and engaging care teams to become change agents in their respective care settings. She has been responsible for leading operations within health plans, large medical systems, accountable care organizations (ACO), and rural community hospitals with excellent results. Her initiatives have improved clinical targets, utilization and care delivery metrics (i.e., readmission reduction, quality measures/care gap closure, and improved throughput while mitigating avoidable costs). Dr. Provo prides herself on facilitating excellent patient care and inspiring the next generation of nurses through innovation, technology, and evidence-based care delivery.
Image credit: ISTOCK.COM/PANYA MINGTHAISONG
The post The Ethical Use of Artificial Intelligence in Healthcare: Integrating Case Management, Utilization Review, and Human Collaboration appeared first on Case Management Society of America.