Funding for digital health soared at the beginning of the COVID-19 pandemic, with funding records broken only midway through 2021. The pandemic demonstrated how critical data and digital health were in enabling healthcare providers to react quickly and nimbly. However, 2022 is already seeing a course correction, and digital health companies will need to work to show their value.
Interest in artificial intelligence (AI) continues to rise: Recent research indicates that the majority of healthcare providers and payers have implemented or plan to implement AI tools within the next three years, and 63% of surveyed provider organizations have positive expectations for AI. To match this growing demand, many AI software developers also report plans to add healthcare AI tools to their product offerings.
However, expectations for transformative change from AI have recently been tempered, with only 19% of provider organizations expecting AI to be “transformational or essential to their organizations,” a decrease from 37% in 2018.
Our work shows that several key challenges are limiting wider adoption, including attaining stable funding for new technologies given the complexity of healthcare payment models, earning patient and clinician trust, navigating regulatory approval, reducing workflow and interoperability barriers, and demonstrating return on investment.
To help AI vendors better understand these issues, this article describes key themes they should consider when developing AI products for the healthcare industry. This work is informed by research that our team at the Duke-Margolis Center for Health Policy conducted with 37 health system leaders, information technology specialists, clinicians, and AI vendor executives. Our project findings focused specifically on clinical decision support (CDS) software but are also applicable to AI more broadly.
Health systems differ widely in who they treat, how they are compensated for care, and what their priorities are. These differences exist both across organizations and within a single institution (e.g., between clinical departments). Accordingly, vendors should consider how the needs of certain clinical departments may differ from the overall health system and how these differences shape the demand for AI products.
At the department level, the demand for an AI product will be driven by the clinical needs and resources of that department. Revenue-generating departments, such as neurosurgery and cardiology, may be more likely to have the resources to invest in AI tools for their respective departments. A department-level priority, for example, may be to manage triage of common symptoms more efficiently via image-processing and diagnostic AI software (e.g., detecting pulmonary nodules, estimating intracranial hemorrhage risk).
Health systems also value algorithms designed specifically to reduce medical errors or uncertainty, rather than confirming what they already know. For instance, one health system developed a tool to assess whether COVID-positive patients presenting in the emergency department were at risk for further deterioration. The tool was designed for accuracy in the small percentage of cases in which physicians were uncertain about the appropriate treatment pathway.
In contrast, the application of AI at the health system level often focuses on systemwide impact. At this level, AI tools are commonly used to reduce cost and increase quality across the organization’s patient population. Examples include AI products that can positively affect Centers for Medicare & Medicaid Services (CMS) quality metrics, patient satisfaction, administrative processes, or patient scheduling. As a result, AI tools prioritized at the health system level trend toward general population health interventions, such as stratifying patients based on readmission risk or predicting which patients might acquire a hospital infection.
Health systems also prioritize AI tools shown to work with their specific patient population. An AI tool may be effective in one setting but less effective when applied to a different patient population with different clinical needs, workflows, insurance coverage, and data systems. To ensure the product is “generalizable,” AI vendors should select training data that includes broad diversity across demographics, social determinants of health, geographic regions, and health system types. Alternatively, vendors can plan to “tune” systems during implementation at each site, though software classified as a medical device may face additional regulatory considerations.
The specific value proposition of an AI-enabled CDS software product will be determined in part by the payment model in which a health system operates. Some health system payment models realize greater financial and clinical value by focusing on upstream activities (e.g., reducing unnecessary emergency department visits), while others will receive more direct value through downstream activities (e.g., mitigating hospital-acquired infections). Understanding the financial underpinnings of the health system can help vendors ensure their AI products offer both clinical and financial value to a specific health system.
The predominant mode of health system payment in the United States is fee for service (FFS). Under FFS, healthcare providers are reimbursed for each procedure or service rendered. The structure of FFS is inherently volume based: more services result in additional payments regardless of whether the care leads to improved health outcomes. Accordingly, health systems adopting an AI product under an FFS payment model will find value in products that are considered discrete billable services.
A recent example is IDx-DR, a diagnostic tool for diabetic retinopathy. In 2020, CMS announced coverage of the IDx-DR AI-enabled tool through the first Current Procedural Terminology code for autonomous analysis of eye exams. This coverage enables physicians to charge Medicare for the use of an AI product on its beneficiaries. That said, few examples of direct reimbursement of AI exist.
Other ways for AI to be profitable in FFS environments include speeding up care processes so health systems can see more patients or identifying conditions that health systems might otherwise miss.
Diagnosis-related group (DRG)-based payments are another commonly used activity-based payment type, particularly for inpatient care. DRGs are standardized fixed payments for hospital services based on patient groups (classified by clinical characteristics and resource usage). Although DRGs are still tied to the volume of services rendered, they act as “small bundles” that cap the amount hospitals receive for a combination of services and products. This cap creates an incentive to reduce costs, as hospitals benefit if actual costs are lower than the fixed DRG payment. In this payment arrangement, AI products can achieve savings by improving clinical and administrative efficiencies, such as faster diagnoses, improved patient triaging, or enhanced care coordination.
AI products may also be eligible for additional reimbursement through CMS’s new technology add-on payment (NTAP) designation, which provides supplemental payments for certain high-cost therapies that exceed the Medicare DRG payment amounts. Two software developers, Viz.ai and Caption Health, have been granted NTAP status. However, the NTAP adjustment may not fully cover costs and is time limited to three years, after which there is no guarantee that the DRG payment amount will be adjusted to account for the AI software.
Although FFS payment models remain the dominant payment method in the United States, health systems are increasingly shifting to “value-based” payment models. Under these arrangements, providers are accountable for the health of their patients measured against the cost of delivering care. Common examples include bundled payments (i.e., a fixed reimbursement for services offered during a defined clinical episode of care or time range) and global capitation (i.e., a fixed payment covering most services rendered to a defined patient population over a defined time range). These payment models both encourage and provide greater financial flexibility for providers to focus on improving clinical quality, improving patient satisfaction, and reducing unnecessary care.
Accordingly, health systems operating in value-based payment (VBP) models may find greater utility in AI models that can predict disease incidence, reduce low-value procedures, or improve population health outcomes. One health system deploying an AI stroke-detection tool within its value-based care model noted that, because the system bears financial risk, the tool’s additional benefit is enabling strokes to be treated quickly, reducing the likelihood of long-term disability and the associated costs for which the system would be responsible. Providers in VBP arrangements also have greater financial flexibility to allocate resources, which may incent AI adoption.
Currently, most health systems do not have standardized approaches to identify and assess AI products. Health systems often report lacking the time, capabilities, and resources to effectively evaluate pitches from AI vendors. Our anecdotal findings suggest that few health systems have established formal protocols to test AI products and ensure their accuracy and reliability. Instead, the pathway to adoption is often driven by a trusted “clinical champion” within the health system who advocates for the AI product.
Vendors we spoke with that had established successful partnerships with health systems overcame these challenges through similar approaches.
First, vendors concentrated on fostering trust. Common methods included committing to sharing technical expertise throughout the duration of the contract, codeveloping the AI product with the health system customer, and communicating a shared vision around what the AI product could achieve. As one system noted, when they engage with external vendors, it is critical that their values are aligned and “that they understand that same set of concepts.”
Food and Drug Administration (FDA) approval or clearance affords an additional value proposition for CDS software. However, not all CDS software qualifies as a medical device. Although CDS software products can be brought to market more quickly if they do not require FDA approval, health systems trust FDA as an independent reviewer. In the absence of FDA authorization, AI products benefit from alternative validation of performance results, such as a trusted third-party reviewer.
Second, successful vendors were attentive to the direct and indirect costs of AI implementation. For instance, integrating the product into a health system’s existing data infrastructure can be resource intensive, requiring both upfront preparation and continuous maintenance. AI vendors can address these concerns by providing technical assistance to offset implementation costs.
In one example, a radiology department working with an AI developer noted that the developer “put a lot of time and money” into running a proof of concept.
“We were running their algorithms for about a year and having weekly meetings, and they were tweaking things and we never paid them a dime,” said a member of the radiology department. “And so, we had fully integrated the product without it costing anything. The information technology people had embedded all the security and they had run our own data to validate the value of the algorithms, so by the time I went to ask for funding, there were no unresolved issues on the table.”
In some instances, AI adoption also may require changes to the clinical workflow. “It's not like you sprinkle the AI dust on it and it will magically solve your problems,” said one data executive for an integrated health system. “It's going to be like 10% data science and 90% sociology and change management and workflow redesign.”
With this in mind, AI vendors should understand typical workflows and design products that, as much as possible, give actionable outputs at the right time to the right person.
Third, vendors should be able to clearly articulate the long-term value proposition of their product for the specific patient population of that health system. Examples include demonstrating how the product could achieve longitudinal quality improvements (e.g., early diagnosis and accurate disease prediction to prevent increased complications and reduce unnecessary healthcare utilization), increased provider and patient satisfaction (e.g., reducing administrative inefficiencies, scheduling challenges, and patient wait times), workflow efficiencies, or cost reductions.
As more systems become familiar with the challenges and benefits of AI and begin to develop standardized assessments for their AI investments, the best-positioned vendors will be those that can clearly articulate their value pathway.
Although the healthcare industry as a whole has been optimistic about AI’s ability to improve quality and efficiency and to reduce costs, challenges to the wider implementation of AI products remain. Our findings indicate that AI vendors should be mindful of the differing priorities of health system implementers, the payment models that affect the financing or implementation of software, and the need to stand out from a competitive crowd of AI products.
AI has the potential to be highly impactful in healthcare—but only if it can be adopted and implemented in a cost-effective way. As a result, vendors need to tailor their products to the unique challenges and payment methods of the health system.
The work described in this article was funded by the Gordon and Betty Moore Foundation.
The authors thank Mark McClellan (director at the Duke-Margolis Center for Health Policy) and Susan Dentzer (former senior policy fellow at the Duke-Margolis Center) for providing strategic feedback and Isha Sharma and Elizabeth Singletary (former colleagues at the Duke-Margolis Center) for their contributions to the work that produced this article. We also thank the health system leaders and AI experts who graciously agreed to discuss their experiences with us.