Making AI Work for Health Care: Stanford’s Framework Evaluates Impact

Navigating the AI Revolution in Healthcare: Stanford Offers a New Framework for Success

The rise of artificial intelligence (AI) in healthcare is creating both excitement and uncertainty. These tools promise quicker diagnoses, reduced workloads for physicians, and improved hospital operations, yet questions linger: Can they deliver on those promises? How can hospitals choose the right AI solutions for their specific needs?

Enter FURM (Fair, Useful, and Reliable Models), an open-source framework developed by scholars at Stanford University. Designed to help healthcare systems navigate the complex landscape of AI implementation, FURM empowers hospitals to assess the true value of these technologies.

Currently in use at Stanford Health Care, FURM has already proven its worth in evaluating a range of AI tools, from an early-detection model for peripheral arterial disease to risk-prediction models for cancer patients.

"One key insight we’ve gained on our campus is that the benefit of any AI solution is directly tied to how it fits within existing workflows and whether we have the time and resources to actually use it in a busy healthcare setting," says Dr. Nigam Shah, professor of medicine and biomedical data science at Stanford, and the framework’s co-creator.

What sets FURM apart is its focus on practicality and real-world impact. Shah points out that while other developers and scholars are working on ensuring AI is safe and equitable, a crucial gap remains in assessing a technology’s true usefulness and feasibility within a specific healthcare system. What works in one hospital might not translate well to another.

A Three-Step Process for AI Evaluation

FURM operates on a three-pronged approach (a brief code sketch after this list illustrates how the stages gate one another):

  1. Defining the "What" and "Why": This stage delves into understanding the problems the AI model is designed to solve, how its output will be used, and its potential impact on patients and the healthcare system. Financial sustainability and ethical considerations are also carefully assessed.

  2. Ensuring Usability: This step examines the real-world feasibility of integrating the model into existing workflows. Can it be seamlessly incorporated into daily operations without disrupting existing processes?

  3. Measuring Impact: FURM emphasizes both initial verification of benefits and ongoing monitoring of the model’s performance once it’s live. This allows hospitals to track the AI’s effectiveness and make necessary adjustments.
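
To make the three stages concrete, here is a minimal Python sketch of how such a staged review might be encoded as a gating checklist. It is purely illustrative: the stage names follow the article, but every questionnaire field and the early-exit logic are assumptions of this sketch, not part of the actual FURM instrument.

```python
from dataclasses import dataclass

@dataclass
class StageResult:
    """Outcome of one evaluation stage (illustrative only)."""
    stage: str
    passed: bool

def evaluate_candidate(answers: dict) -> list[StageResult]:
    """Walk a candidate AI tool through the three stages described above.

    `answers` is a hypothetical questionnaire filled in by the review team;
    none of these keys come from the real FURM materials.
    """
    results = []

    # Stage 1: the "what" and "why" -- problem fit plus financial and ethical review.
    stage1 = (answers["solves_defined_problem"]
              and answers["ethics_review_passed"]
              and answers["financially_sustainable"])
    results.append(StageResult("Define the what and why", stage1))
    if not stage1:
        return results  # stop early: later stages are moot

    # Stage 2: usability -- does it fit existing clinical workflows?
    stage2 = (answers["fits_existing_workflow"]
              and answers["staff_capacity_available"])
    results.append(StageResult("Ensure usability", stage2))
    if not stage2:
        return results

    # Stage 3: impact -- verify benefit up front, then keep monitoring in production.
    stage3 = (answers["prospective_benefit_verified"]
              and answers["monitoring_plan_in_place"])
    results.append(StageResult("Measure impact", stage3))
    return results

# Example: a tool that fits the workflow but lacks a monitoring plan fails stage 3.
demo = {
    "solves_defined_problem": True, "ethics_review_passed": True,
    "financially_sustainable": True, "fits_existing_workflow": True,
    "staff_capacity_available": True, "prospective_benefit_verified": True,
    "monitoring_plan_in_place": False,
}
for result in evaluate_candidate(demo):
    print(f"{result.stage}: {'pass' if result.passed else 'fail'}")
```

The early returns mirror the framework's ordering: there is no point assessing usability or impact for a tool that fails the initial "what and why" review.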

According to Dr. Shah, "FURM can help healthcare systems focus their time and resources on technologies that are truly valuable, rather than experimenting with everything and hoping something sticks. This can prevent what we call ‘pilotitis,’ where hospitals get caught in a cycle of pilots that ultimately lead nowhere."

He adds, "It’s crucial to consider the scale of impact: an AI model might be excellent, but only beneficial for a small group of patients."
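
Shah's point about scale lends itself to back-of-the-envelope arithmetic. The sketch below compares two hypothetical models; all of the numbers are invented for illustration and do not come from Stanford's evaluations.

```python
def projected_annual_benefit(eligible_patients: int,
                             benefit_rate: float) -> float:
    """Expected number of patients helped per year (illustrative only)."""
    return eligible_patients * benefit_rate

# A near-perfect model for a condition seen in ~80 patients a year...
rare = projected_annual_benefit(eligible_patients=80, benefit_rate=0.95)
# ...versus a modest model applied to ~20,000 patients a year.
common = projected_annual_benefit(eligible_patients=20_000, benefit_rate=0.10)
print(rare, common)  # 76.0 vs 2000.0 patients helped per year
```

Even with far weaker per-patient performance, the second model reaches more than 25 times as many patients, which is exactly the kind of trade-off a system-level evaluation is meant to surface.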

Beyond the Bottom Line: Integrating Ethics into AI Implementation

Beyond practical considerations, FURM also takes a proactive approach to ethical implications. Michelle Mello, professor of law and health policy at Stanford, highlights the importance of addressing ethical challenges before they arise.

Mello, along with Danton Char, associate professor of anesthesiology, perioperative and pain medicine, and an empirical bioethics researcher, developed the ethical assessment component of FURM. Their work focuses on helping hospitals develop strong processes for monitoring the safety of new tools, determining how to disclose AI-driven decisions to patients, and minimizing the potential for AI to exacerbate healthcare disparities.

Democratizing AI Access and Evaluation

Dr. Sneha Jain, clinical assistant professor in cardiovascular medicine at Stanford and co-creator of FURM, emphasizes the need for broader accessibility.

"Our goal is to democratize robust yet feasible evaluation processes for these tools and associated workflows to improve the quality of care for patients across the United States, and hopefully globally," she says.

Jain is spearheading the development of the Stanford GUIDE-AI lab, which stands for Guidance for the Use, Implementation, Development, and Evaluation of AI. The lab aims to continually refine AI evaluation processes and make them accessible to hospitals with limited resources.

Mello and Char are similarly working on making ethical assessments more widely available through funding from the Patient-Centered Outcomes Research Institute and Stanford Impact Labs.

Looking ahead, the Stanford team is committed to adapting FURM to the rapidly evolving landscape of AI, including generative AI.

"If you develop standards or processes that aren’t practical for people to use, they won’t be used. A key part of our work is figuring out how to implement these tools effectively, especially in a field where everyone is pushing to move quickly," Mello concludes.
