AI Is Coming to Your Hospital. Will Clinicians Trust It?

By AAMI

There is “tremendous enthusiasm” for disruptive innovation, as long as it is clinically validated. When it comes to AI in health care, transparency in how these tools are designed and validated will be essential to establishing that trust.

That was the message of the first keynote address at the 2022 AAMI/FDA/BSI International Conference on Medical Device Standards and Regulation, delivered on October 18 by Jesse Ehrenfeld, MD, MPH, president-elect of the American Medical Association and co-chair of the AAMI AI standards committee.

“This is a lesson that we have learned across the industry over and over and over, whether it’s electronic health records, digital wearables, other devices, that not every tool, not every device, not every app lives up to its promise. And while digital health and AI tools, in particular, have nearly unlimited potential to change medicine, to change medicine for the better … physicians increasingly understand the stakes, what is ahead of us as we think about using these tools in our practice,” Ehrenfeld said.

The regulatory environment, however, has failed to catch up to the pace of change, according to Ehrenfeld.

“We’ve got to make sure that we work together with intentional and great purpose to ensure that patients and their care teams can trust what they’re using,” he said.

“There’s not a day that goes by that I don’t see patients, deliver care, where I don’t see opportunities for the technology to do better. And there are real gaps that only will be achieved if we can come together through the standards community to make sure that we’ve got the right tools and technologies to work on,” he added.

“Unfortunately, for the patients that we serve, there continues to be a lot of uncertainty around the direction for the regulatory framework for digital health tools and AI in particular … we’ve got to ensure that the regulatory framework only allows safe, high-quality, clinically validated tools to come into the marketplace. And AI simply cannot allow us to lower the quality of the tools that show up at the patient’s bedside, or to introduce bias into the results that we use as practitioners.”

Hospitals, Meet AI

Interest in AI tools among physicians is growing, according to a recent AMA Digital Health Physician Survey cited by Ehrenfeld, with adoption rising among physicians of every gender, specialty, and age group. Plans to adopt most emerging technologies are high, but actual usage remains low. Nearly 1 in 5 physicians is currently using AI for practice efficiencies.

A reported 2 in 5 want to adopt AI tools in the next year.

Nearly 3 in 5 physicians think these technologies can help them in key areas, such as reducing the impact of chronic disease.

“How clinicians integrate these things into the delivery models of the future is going to be dependent though on a lot of other factors like cost of their clinical practice, having an appropriate evidence base to support the use for an individual patient. And those are messages that come through loud and clear in our digital health work,” Ehrenfeld said. “Improving clinical outcomes and improving efficiency of the workflow are the key drivers of physician adoption. But, increasingly, coverage for issues like malpractice insurance continues to be an important requirement across all physicians in our surveys.”

Digital technologies, wearables, and AI offer “almost limitless potential” to transform the health care delivery system, both for clinician practice and for the patient experience.

“But without direct input from the physician community throughout the design life cycle, we will see that these technologies will fail to deliver. Or even worse, they’ll further complicate health care or actually impede the delivery process for patients. We ought to be working towards the quadruple aim, improving patient care [and] clinician well-being, lowering health care costs, and having better outcomes for patients,” Ehrenfeld said. “And I would say that every single new healthcare technology ought to be designed to accomplish at least one of these goals, hopefully more.”

In addition, when an AI system is added to a medical workflow, those working in it must be made aware of the AI and properly trained in its use. Ehrenfeld cited the Boeing 737 MAX crashes of 2018 and 2019, in which hundreds of lives were tragically lost because of automated technology that the pilots were unaware of.

“We have to know that the AI is there. We’ve got to have training about the expected function and anticipated dysfunction of the system. People working with AI-enabled tools and devices can only supervise and correct them if we know that the AI is there and what the outputs are that it’s producing. And I will tell you that it’s been well documented that the flight crews of those two doomed airliners did not know about this new AI system on their aircraft. It was not in their operations manuals. There was no specific training. You simply cannot repeat that same mistake in health care,” he said.

This sensitivity around technology places us at a “critical juncture” for establishing trust, at a time when skepticism toward science, regulation, and government agencies is high. In such an environment, expertise may even be deliberately ignored.

“We cannot allow any openings in the regulatory framework or in the product development life cycle to exacerbate this new global problem,” Ehrenfeld said. “When I look out at the current development and implementation landscape for new digital tools [and] AI-enabled technologies, where it can be much more difficult to understand when an algorithm is working as expected and when it is not, as a community, a collective community of developers, regulators, and users, we’ve got to ensure that we do not lose the public trust, the confidence of the public in the tools that are coming into the marketplace.”
