Motivating luddites toward AI-augmented healthcare

By Shawn Jenkins

Parisa Rashidi, Ph.D., an associate professor of Biomedical Engineering and inducted fellow of the American Institute for Medical and Biological Engineering (AIMBE)

Renowned UF biomedical engineering expert Parisa Rashidi urges Congress to fast-track broader use of AI in healthcare

Since Spring 2023, when several prominent tech thought-leaders issued a public plea to pause the development of certain AI language models for fear of opening a Pandora’s Box, the U.S. has been agonizing over the rightful place of artificial intelligence in our daily lives. But while Congress and regulatory agencies wrestle with setting policy to avoid unintended consequences, Dr. Parisa Rashidi’s verdict for the healthcare sector is “Let’s go.”

An inducted fellow of the American Institute for Medical and Biological Engineering (AIMBE) and an associate professor of Biomedical Engineering in the Herbert Wertheim College of Engineering at the University of Florida (UF), Dr. Rashidi twice presented her expert insights in congressional briefings last summer on regulating the use of AI in healthcare.

As co-director of the Intelligent Clinical Care Center (IC3) at UF — a state-approved, cross-campus center that seeks to transform clinical care medicine using immersive and pervasive AI to assist the human-centered healthcare system — she has seen the evidence pointing to the need for unfettered support of medical AI.

“In an acute care setting, like where I’m working in the IC3, the use of AI is expanding and showing great promise for numerous applications like treatment personalization, decision support, surgery, drug discovery, administrative workflow optimization, to name a few,” Dr. Rashidi said. “Healthcare is a very complex ecosystem, and there are many problems that we need to address that require the use of advanced technology.”

Among those problems are inefficiencies and errors. Dr. Rashidi referenced two papers¹ in her congressional hearings that indicated some troubling medical trends: 1) about one-quarter of all healthcare dollars spent in the U.S. is unnecessary; 2) nearly one in four Americans admitted to the hospital will experience a medical complication due to errors in treatment. Couple that with a large domestic population living a modern sedentary lifestyle and a prevalence of diseases like diabetes, and the need to use the sophistication of advanced AI to inform public policy becomes evident.

Of primary concern in Dr. Rashidi’s discussion with lawmakers was the necessity of a national data infrastructure to construct reliable AI models for a diverse national patient population.

“To do anything in AI in the medical domain, we need it,” Dr. Rashidi said. “Currently, we don’t have it. Every hospital has its own data repository, and if we’re lucky, some states might have a system like OneFlorida that is linking some clinics and hospitals across the state. But that’s rarely the case across the nation. Academic centers, like UF, can collect this data and build AI models, but what about the rural areas? We don’t have any data for them. And those are built on very different demographics; if we use that in a very different geographical area of population, it’s not going to work. We will be generalizing those models to a different patient population. In terms of education, ethnicity, other social determinants of health, those will be very different.”

Another glaring bottleneck in the full implementation of medical AI is the lack of a legislative framework for clinical AI reimbursement. Along with Dr. Regina Barzilay, Distinguished Professor of AI and Health in MIT’s School of Engineering, Dr. Rashidi drafted a letter to the Centers for Medicare & Medicaid Services to bring attention to how the current national healthcare model could inhibit acceptance of an AI-augmented clinical environment. If doctors spend less time with a patient (thanks to AI), it translates to perceived lower effort and lower reimbursement under many care models, even though AI may expedite care and make diagnoses more accurate.

“How do we balance this?” Dr. Rashidi said. “There’s an inherent contradiction. It hinders the incentive to employ AI in the clinical setting. You can have the best models, but if the clinicians are financially sanctioned, you have a problem. In reality, the AI will serve as an assistive tool for decision-making, automating some of the routine, non-critical tasks, allowing clinicians to dedicate more time to the human touch of patient care. AI can never replace compassion.”

Dr. Rashidi feels that a leading indicator for the success against the current inertia of AI in the medical space would be “a working AI model in the clinic.” She hastened to point out the long preliminary due diligence of internal/external validation, prospective validation and the various compliance protocols required for widespread adoption. “But, if something is deployed in the clinic, people are using it and both the clinicians and end users are happy, I would consider that as positive,” she said. “The overall question is ‘How can we deploy these models in the future in a safe, robust and secure manner?’”

For their part, Dr. Rashidi said, her Capitol Hill audience was “very receptive.”

“I was in D.C. twice in the space of two months to meet with them, so it appears it’s an area they were very interested in.”

  1. Bates, David W., David M. Levine, Hojjat Salmasian, Ania Syrowatka, David M. Shahian, Stuart Lipsitz, Jonathan P. Zebrowski, et al. “The Safety of Inpatient Health Care.” New England Journal of Medicine 388, no. 2 (2023): 142–153.
     Shrank, William H., Teresa L. Rogstad, and Natasha Parekh. “Waste in the US Health Care System: Estimated Costs and Potential for Savings.” JAMA 322, no. 15 (2019): 1501–1509.
