Between the COVID-19 pandemic, a mental health crisis, rising healthcare costs, and aging populations, industry leaders are rushing to develop healthcare-specific artificial intelligence (AI) applications. One signal comes from the venture capital market: over 40 startups have raised significant funding ($20M or more) to build AI solutions for the industry. But how is AI actually being put to use in healthcare?
The “2022 AI in Healthcare Survey” polled more than 300 respondents from across the globe to better understand the challenges, triumphs, and use cases defining healthcare AI. Now in its second year, the survey produced results that did not change significantly from the first, but they do point to some interesting trends foreshadowing how the pendulum will swing in years to come. While parts of this evolution are positive (the democratization of AI), others are less welcome (a much larger attack surface). Here are three trends enterprises need to know.
Gartner estimates that by 2025, 70% of new applications developed by enterprises will use no-code or low-code technologies, up from less than 25% in 2020. While low-code can simplify programmers' workloads, no-code solutions, which require no data science intervention, will have the biggest impact on the enterprise and beyond. That's why it's exciting to see a clear shift in AI use from technical titles to the domain experts themselves.
In healthcare, that shift is already visible: more than half (61%) of respondents to the AI in Healthcare Survey identified clinicians as their target users, followed by healthcare payers (45%) and health IT companies (38%). Paired with significant development of and investment in healthcare-specific AI applications, and the availability of open source technologies, this points to wider industry adoption.
This is significant: putting AI in the hands of healthcare workers, the way common office tools like Excel or Photoshop already are, will change AI for the better. Beyond making the technology more accessible, it also enables more accurate and reliable results, since a medical professional, not a software professional, is now in the driver's seat. These changes will not happen overnight, but the uptick in domain experts as primary users of AI is a big step forward.
Other encouraging findings involved advances in AI tools and a desire among users to drill down on specific models. When asked what technologies they plan to have in place by the end of 2022, technical leaders cited data integration (46%), business intelligence (44%), natural language processing (NLP) (43%), and data annotation (38%). Text is now the data type most likely to be used in AI applications, and the emphasis on NLP and data annotation indicates an uptick in more sophisticated AI technologies.
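To make that concrete, here is a minimal sketch of what a first pass at clinical text processing can look like. It uses spaCy's general-purpose English model purely as a stand-in for the healthcare-specific pipelines the survey points to, which would surface clinical entities such as conditions, drugs, and dosages; the sample note is invented for illustration.

```python
# Minimal sketch: extracting entities from a clinical note with spaCy.
# en_core_web_sm is a general-purpose English model used as a stand-in;
# a real deployment would swap in a healthcare-tuned pipeline.
import spacy

# Assumes: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

note = (
    "Patient presents with shortness of breath and a history of "
    "type 2 diabetes. Prescribed metformin 500 mg twice daily."
)

doc = nlp(note)

# Print each detected entity with its label; a clinical model would
# return domain labels (condition, drug, dosage) instead of the
# generic labels this model produces.
for ent in doc.ents:
    print(ent.text, ent.label_)
```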
These tools enable important activities like clinical decision support, drug discovery, and medical policy assessment. After two years of the pandemic, it's clear how crucial progress in these areas is as we develop new vaccines and work out how to better support healthcare systems in the wake of a mass event. These examples also make it evident that healthcare's use of AI differs greatly from other industries' and requires a different approach.
As such, it should come as no surprise that technical leaders and respondents from mature organizations alike cited the availability of healthcare-specific models and algorithms as the most important requirement when evaluating locally installed software libraries or SaaS solutions. Judging by the venture capital landscape, the libraries already on the market, and the demand from AI users, healthcare-specific models will only grow in the coming years.
All the AI progress made over the past year has also opened up a range of new attack vectors. When asked what types of software they use to build their AI applications, respondents' most popular selections were locally installed commercial software (37%) and open source software (35%). Most notable was a 12% decline in the use of cloud services (30%) from last year's survey, most likely due to privacy concerns around data sharing.
Additionally, a majority of respondents (53%) chose to rely on their own data to validate models, rather than on third-party or software vendor metrics. Respondents from mature organizations (68%) signaled a clear preference for using in-house evaluation and for tuning their models themselves. Again, with stringent controls and procedures around healthcare data handling, it’s obvious why AI users would want to keep operations in-house when possible.
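In practice, in-house validation can be as simple as scoring a candidate model against a locally held, labeled holdout set rather than accepting a vendor's reported metrics. The sketch below assumes a hypothetical `vendor_model` inference wrapper and a local file named `local_holdout.csv`; both are placeholders, not references to any particular product.

```python
# Minimal sketch of in-house validation: scoring a vendor or third-party
# model against locally held, labeled records instead of trusting the
# vendor's reported metrics.
import pandas as pd
from sklearn.metrics import classification_report

# Labeled evaluation set that never leaves the organization
# (assumed columns: "text", "label").
holdout = pd.read_csv("local_holdout.csv")

def vendor_model(texts):
    # Placeholder: replace with the real inference call for the
    # locally installed or hosted model under evaluation.
    return ["negative" for _ in texts]

predictions = vendor_model(holdout["text"].tolist())

# Compare the model's predictions against in-house ground truth.
print(classification_report(holdout["label"], predictions))
```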
But regardless of software preferences or how users validate models, escalating security threats to healthcare are likely to have a substantial impact. Other critical infrastructure sectors face similar challenges, but healthcare breaches carry ramifications beyond reputational and financial loss: the loss of data or tampering with hospital devices can be the difference between life and death.
AI is poised for even more significant growth as developers and investors work to get the technology into the hands of everyday users. But as AI becomes more widely available, and as models and tools improve, security, safety, and ethics will take center stage. It will be interesting to see how these areas of AI in healthcare evolve this year, and what that means for the future of the industry.