Artificial intelligence has a way of capturing public interest. Newspaper headlines and startup pitches alike present visions for the future that seem, at times, only a step removed from science fiction. Take the aviation sector as an example. Last year, writers for Bloomberg reported that multiple companies, from industry powerhouses like Boeing to tiny startups like M2C Aerospace, were working to address a pilot shortage by creating semi-autonomous flight systems. Their efforts position AI as a revolutionary force that will redefine how consumers, investors, and industry professionals alike engage with air travel.

 

Articles like these are exciting and controversial, and they inspire conversation. However, in illustrating ambitious visions for the future, they can often overlook the significant role that AI already plays. To return to aviation as an example: both Delta and American Airlines rely almost entirely on AI-powered tools to coordinate bookings and manage cancellations. Time-consuming tasks that used to require human intervention are now quietly facilitated by digital assistants and machine learning software. These pragmatic tools don’t earn headlines, but they do have value to those in the field. According to a recent global forecast, analysts expect the industry’s investment in artificial intelligence to grow to $2.2 billion by 2025.

 

Self-piloting technology may generate conversation, but it won’t be nearly as pervasive or useful as the humble digital booking assistant. Put more generally: the amount of hype that an AI innovation receives may not correlate with its actual usefulness or success in a given industry.

 

A similar pattern has begun to emerge in healthcare. Over the past few years, AI has received considerable media and investor attention. According to recent CB Insights data, companies in healthcare AI received $5.8 billion in equity funding from the first quarter of 2012 through the first quarter of 2019. Federal regulators have shown similar interest. Since 2013, the FDA has fast-tracked review for specific categories of health-centered AI services and, as analysts for the CB Insights report describe, “opened commercial pathways” for more than 70 AI-focused imaging and diagnostic startups.

 

Many of these equity-funded companies have both ambitious plans for AI technology and investor support. Israel-based startup Sight Diagnostics, for example, recently built a desktop machine that uses AI technology to quickly perform blood counts at the point of care by analyzing samples of a patient’s blood. Project proponents believe that the tool, dubbed OLO, will give doctors the information they need to suggest a course of treatment in a single office visit. The company’s work has proven compelling to investors, as evidenced by the $27.8 million Sight Diagnostics raised during a recent Series C funding round. Similarly, the startup XtalPi earned significant media attention after forging multi-million dollar partnerships with several major pharmaceutical companies on the strength of its AI-powered drug design technology.

 

However, while ambitious startups like those mentioned above tend to receive a lot of investor and media attention, their presence in the AI-centric healthcare market is relatively small, and overly ambitious AI tools can backfire on their users if they are not implemented thoughtfully.

 

Consider Watson for Oncology’s failure to launch as an example. In 2012, doctors at the Memorial Sloan Kettering Cancer Center agreed to work with IBM to train Watson to diagnose and develop treatment plans for patients. However, physicians soon began to complain that Watson gave poor advice. In one notable case reported by The Verge, the AI tool suggested that a patient experiencing severe bleeding should take a drug that would worsen the bleeding. Fortunately, the patient was hypothetical, and no one was harmed. Still, Watson’s inaccuracy led many participating doctors to criticize or reject the technology.

 

Their attitude is understandable, if not entirely fair. When Watson was brought to Memorial, doctors were asked to input real patient data into its system. Doing so would allow the AI to synthesize massive amounts of information and, from those cases, offer new insights for treatment. However, because the tool’s guidelines changed often enough to make updating real cases difficult, scientists began using hypothetical examples. The AI tool’s user-unfriendly design led to poor data collection; poor data collection led to poor results; and poor results led to user rejection. Watson for Oncology proves that execution matters just as much as potential: even the most useful tools will fall flat if their target audience doesn’t understand how to use them.

 

But the real problem this failed AI launch illustrates isn’t one of efficacy; it’s one of implementation. The Watson project is very nearly a case study in how not to introduce and inspire interest in a new technology. Deployment is as much about marshaling human support and building allowances for the technology into an organization’s operations and problem-solving mindset as it is about the technology itself, if not more.

 

As McKinsey partner Brian McCarthy wrote in a recent cover article for the Harvard Business Review, “While cutting-edge technology and talent are certainly needed, it’s equally important to align a company’s culture, structure, and ways of working to support broad AI adoption. But at most businesses that aren’t born digital, traditional mindsets and ways of working run counter to those needed for AI.” 

 

According to McCarthy, organizations that use AI to their maximum potential share three primary characteristics. First, they have cross-functional teams that encompass a variety of perspectives and skill sets and can, collectively, speak to whatever organizational or technological problem might arise during deployment. Users should not be left to fumble through a rollout without support and guidance, as the doctors at Memorial were.

 

Second, people must be able to trust the suggestions that an AI-powered tool provides and act on them. If users need to continually verify the tool’s conclusions or check with a supervisor before acting, the sheer inefficiency of the experience will inhibit use. Similarly, having to input hypothetical information for the sake of contributing data dampened doctors’ interest in using Watson for real cases.

 

Lastly, all AI tools need to be rolled out with an experimental, iterative mindset. If, as in Watson’s case, users believe that the tool is complete and shouldn’t have UX or accuracy problems, they will inevitably become disappointed in and disillusioned by the technology. Those implementing the tool will then lose out on potentially invaluable constructive feedback as users dismiss it as useless or ineffective.

 

Often, massive rollouts of overhyped, care-centric AI tools don’t work because there is a fundamental misalignment between the technology’s capabilities and its users’ expectations for or interest in the tool. If rollouts aren’t designed around a developmental, collaborative, and human-centric approach to implementation, they will inevitably fall flat.

 

The Reality of AI Usage in the Health Sector

 

The tools that have become the face of AI in healthcare are considerably more pragmatic, if less interesting in a headline. According to a recent report from Accenture, the widespread adoption of certain clinical health AI applications could create up to $150 billion in annual savings in the U.S. healthcare economy by 2026. Highlights include administrative workflow assistance, projected to deliver $18 billion in benefits, and fraud detection, projected to save $17 billion. The healthcare sector would also save $20 billion through virtual nursing assistants, $16 billion through dosage error reduction, and $5 billion through preliminary diagnosis.

 

Rather than pursue moonshot applications, these tools seek to lessen the administrative burden, improve efficiency, and support doctors in their day-to-day work. Such solutions may not be quite as flashy or impressive as using AI to facilitate on-the-spot blood tests or design new drugs, but they do provide real, pragmatic answers to the inefficiencies and administrative slowdowns that trouble the healthcare sector today. The potential for savings has prompted industry action: at the 2018 Healthcare Information and Management Systems Society Global Conference, athenahealth, Cerner, Allscripts, eClinicalWorks, and Epic all announced plans to integrate AI into the workflows of their respective electronic health record (EHR) platforms.

 

One administration-centric program that has received widespread attention is Suki, an AI-powered digital scribe that can integrate with various electronic health record systems. The tool listens to conversations between doctors and patients to create accurate clinical notes in real time and continually updates its knowledge bank to learn its user’s unique speech patterns, accents, workflows, and protocols. 
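
 

To make the general shape of such a tool concrete, the sketch below shows, in broad strokes, how an ambient clinical scribe pipeline can work: transcribe the conversation, route each utterance into a structured note, and hand a draft back to the clinician for review. The stage names, the SOAP-style note format, and the stub functions are illustrative assumptions chosen for explanation only; they do not describe Suki’s actual architecture or API.

    from dataclasses import dataclass


    @dataclass
    class ClinicalNote:
        """A minimal SOAP-style note an ambient scribe might draft."""
        subjective: str = ""
        objective: str = ""
        assessment: str = ""
        plan: str = ""


    def transcribe(audio_chunk: bytes, speaker_profile: dict) -> str:
        """Stand-in for a streaming speech-to-text step. A real system would
        call an ASR model biased by the clinician's speaker_profile (accent,
        specialty vocabulary) learned from earlier sessions."""
        return "Patient reports two weeks of intermittent chest pain on exertion."


    def route_utterance(note: ClinicalNote, utterance: str) -> ClinicalNote:
        """Stand-in for the language-understanding step that places each
        utterance into the appropriate section of the structured note."""
        if "reports" in utterance or "complains" in utterance:
            note.subjective += utterance + " "
        else:
            note.plan += utterance + " "
        return note


    def push_draft_to_ehr(note: ClinicalNote, encounter_id: str) -> None:
        """Stand-in for the EHR integration step. Here the draft is simply
        printed; a real deployment would write it to the record system for
        the clinician to review, edit, and sign."""
        print(f"Draft note for encounter {encounter_id}:")
        print(f"  Subjective: {note.subjective.strip()}")
        print(f"  Plan: {note.plan.strip()}")


    if __name__ == "__main__":
        note = ClinicalNote()
        # In practice, audio arrives as a live stream during the office visit.
        for chunk in [b"...audio..."]:
            utterance = transcribe(chunk, speaker_profile={"clinician_id": "dr-001"})
            note = route_utterance(note, utterance)
        push_draft_to_ehr(note, encounter_id="demo-encounter-001")

The design choice worth noting is that the output is a draft the clinician reviews and signs, not an autonomous decision, which is exactly what keeps tools like this in the pragmatic, assistive category discussed above.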

 

The potential value that Suki offers is enormous. Because the tool allows providers to drastically cut down on the time they spend filling out repetitive paperwork, it stands to help physicians better balance their workload and dedicate more of their day to helping patients. Research on Suki’s capabilities, too, is promising. During Suki’s trial period, analysts found that the average time physicians spent in their EHR fell from 4.3 minutes to just 1.8 minutes, a reduction of nearly 60 percent.

 

As a result, Suki has seen widespread adoption across provider offices. In June, the company came to an agreement with Unified Physician Management to distribute Suki across Unified’s network of over 1,500 women’s health care providers. In March, Suki’s developers collaborated with Sutter Health to bring the digital assistant to the system’s campus in Northern California. Suki might not be the flashiest tool — but in terms of time-saving and pragmatic usefulness, it is invaluable. 

 

While AI’s applications for patient care have captured the lion’s share of media attention and high-profile investment, the technology’s real impact will likely come in administrative services and diagnostic assistance. However, even these tools will only be useful if they are thoughtfully integrated into day-to-day operations and do not alienate the doctors and patients they are meant to support.