Newsletter
Navigating the Risks and Benefits of AI in Clinical Diagnosis
Mar 09, 2025
Artificial intelligence (AI) technologies have the potential to transform health care services. Although numerous questions and risks remain regarding AI’s integration into health care, the potential benefits of optimizing services and mitigating risk are significant. As health care systems assess how to incorporate AI into their operations safely and ethically, Candello examines malpractice claims data in an area where AI may have a significant influence: diagnosis.
In a recent Candello webinar on AI’s integration into health care services, presenters identified diagnostic services as a significant area where AI could reduce patient harm and medical malpractice costs. For example, the national Candello database identified more than 12,000 diagnosis-related cases closed between 2015 and 2024, resulting in approximately $4 billion in losses. Cancer diagnosis errors represented a significant subset of these cases; they remain a consistent focus of patient safety efforts because of their prevalence, expense, intractability, and the severe, often tragic, harm they cause patients. The breakdown below demonstrates the scale of the opportunity to reduce losses from cancer diagnostic errors.
These data point to areas where AI could have a considerable effect in mitigating claims, especially radiology and patient assessment issues such as the failure to reconcile relevant symptoms and test results or the misinterpretation of diagnostic tests.
Outside of diagnosis, AI also presents promise in other key areas, such as:
- Precision medicine tailored to individual patient profiles
- Faster drug discovery and clinical trials
- Automated administrative functions to reduce costs and errors
- Enhanced care processes, such as preventing medication errors
But of course, using artificial intelligence in health care is not without risks. A major concern is the “black box” problem: the difficulty of understanding how deep learning systems reach their decisions. This opacity can call into question the reliability and validity of AI recommendations, even as institutions work to keep pace with evolving policies and procedures.
AI systems are only as effective as the data on which they are trained. If an AI tool is trained primarily on data from a specific patient demographic, its usefulness in other patient populations may be limited. Such inherent biases could lead to erroneous conclusions, inadvertently contributing to misdiagnosis and increasing the likelihood of malpractice claims.
Moreover, determining liability in the event of an AI-related diagnostic error presents a formidable challenge. Traditional malpractice frameworks focus on individual provider actions, but when an adverse event is linked to an AI-generated recommendation, assigning fault becomes complex. This legal ambiguity could complicate the defense of health care providers who rely on AI as a clinical support tool.
Mitigating the Risks of AI
Clear documentation is critical when integrating AI into clinical practice. Providers should be prepared to meticulously record when and how AI recommendations are used, along with the rationale for their clinical decisions. This transparency not only supports informed decision-making but also provides a strong defense against potential malpractice allegations.
Ongoing education and training are equally important. Health care professionals must understand both the capabilities and limitations of AI tools. By fostering a culture of critical engagement with technology, institutions can empower clinicians to determine when to trust an algorithm and when to rely on their expertise.
Finally, collaborative governance and policy development are necessary to navigate the complexities introduced by AI. Establishing AI governance task forces, conducting thorough due diligence on vendors, and developing clear policies and procedures for AI use can help delineate responsibilities among health care providers, institutions, and technology vendors. Regulatory bodies, including the FDA, are actively developing guidelines to ensure that AI tools are safe and effective, thereby supporting best practices in clinical settings.
Artificial intelligence holds transformative potential for health care, offering significant benefits in enhancing patient care and operational efficiency. However, as the technology evolves, a balanced approach that embraces innovation while safeguarding patient safety is imperative. By investing in rigorous documentation, ongoing education, robust validation, and collaborative governance, health care organizations can confidently navigate the evolving landscape of AI while mitigating malpractice risks.
Additional Resources
- Candello Community Platform: Candello Community members can access the educational webinar, “Artificial Intelligence in Health Care: Risks and Benefits”
- Milbank Quarterly: Artificial Intelligence and Liability in Medicine: Balancing Safety and Innovation
- Health Affairs: Artificial Intelligence In Health And Health Care: Priorities For Action