Daily Dose of Healthcare AI (Fri. 4/20)

By | Biometrics, Cardiology, Daily Dose -- News, Deep Learning, Healthcare Reform, Law & Ethics, Medical Practice Areas, Physicians' Practice / Job, Radiology, Robotics


Here’s the Path Forward for Health Reform in 2018 (Heritage Foundation)

New imaging technique helps radiologists downgrade benign breast masses (Radiology Business)

Could a hacker expose a patient to excessive radiation during a CT scan? Maybe. (Health Imaging)

AI-enhanced instrumentation – the fusion of deep learning and medical sensors creates dramatic improvements (Diginomica)

The Roadmap To Introducing AI And Robotics In Healthcare (Forbes)

2018: A Year of Regulation In Tech (Tech Native)  

Testing algorithms key to applying AI and machine learning in healthcare (HealthcareITNews)

Digital Health Briefing (Business Insider) — health insurers worry; retinopathy; standardize telemedicine

Separating the hype from reality in healthcare AI (HealthcareITNews)

Poor, developing countries will benefit from AI (The Gulf Today)

Microsoft AI Helping us Accurately Predict Cardiac Diseases: Apollo (TECH News 18)

Health IT Infrastructure Necessities for AI Cybersecurity (CIO Review)

A medical charter to address physician burnout, promote wellness (FierceHealthcare)

Researchers say use of artificial intelligence in medicine raises ethical questions (Medical Technology News)

Pentagon wants to spot illnesses by monitoring soldiers’ smartphones (Washington Post)

New AI Sensors Can See Through Anyone’s ‘Poker Face’ (Interesting Engineering)

AI Learns a New Trick: Measuring Brain Cells (WIRED)

MIT Researchers Have Created a Bizarre Headset That Lets You Communicate Without Speaking (Science Alert)

MIT Backs Away From Startup That Aims to Preserve Your Brain and Memories After You Die (GIZMODO)

Enterprise AI will make the leap — who will reap the benefits? (TechCrunch)

How Real-Time AI is Accelerating the Disruption of Healthcare — An Interview with Nuance Communications

By | Corporations, Interviews, Miscellaneous

Open your favorite news app and browse to the tech section. Chances are good you’ll find at least one feature on artificial intelligence (AI). AI continues to dominate the conversation, both in the boardroom and at the kitchen table. As people become increasingly comfortable relying on technology and virtual assistants to make life easier, we begin to wonder how we might take advantage of that to address many of today’s challenges in healthcare.

Burlington, Massachusetts-based Nuance® Communications (Nuance) has been innovating virtual assistant technology and optimizing AI for the doctor-patient relationship.

I recently caught up with Nuance’s executive vice president and general manager, Satish Maripuri, to learn more about his vision and the potential to make a significant and more immediate impact on healthcare.

How do you define Artificial Intelligence?

Satish Maripuri: That’s a broad question. The simple answer is that Artificial Intelligence, or AI, means that we are teaching machines to learn. However, I think it’s helpful to consider AI as the convergence of compute power, mobility, cloud, and big data technologies that, when combined, can augment human intelligence and truly disrupt how we accomplish tasks. In the healthcare space, we invest in AI because we trust its ability to help us address three key challenges: healthcare costs, quality, and patient outcomes. Additionally, there is a real need to make lives easier and more meaningful – and today, people are more open to using technology to do so.

Many forward-thinking scientists, philosophers, and business leaders fully embrace the idea that AI ultimately may mean replacing humans with machines. Instead, I think of AI as a way to extend and augment human thinking, decision making, and creativity. When we apply AI in specific situations with insights and recommendations, we are augmenting intelligence and the knowledge base physicians use to make decisions – an area where we focus at Nuance Healthcare. This area of augmented intelligence includes an element called conversational AI, which allows people to engage in natural interaction with computers using either text or speech to gain immediate access to highly actionable information.

What subsets of Artificial Intelligence such as machine learning or deep learning do you believe have the greatest impact on the development of AI as it is applied to healthcare?

Satish Maripuri: We need to envision a world where doctors, nurses, radiologists, and care teams, in general, can experience ambient intelligence and mobility across devices in any care setting, thereby liberating them to care more for patients rather than be constantly tied to technology. This is key and allows everything to fall into perspective. We should look at this as a combination that is more powerful than the sum of its individual parts.  The impact AI will have on healthcare comes from a new and expanded use of AI-powered solutions, such as conversational virtual assistants combined with mobility. It’s in this area that you begin to see highly intelligent systems that can act in partnership with human intelligence in powerful ways. Nuance Healthcare AI focuses on augmenting a physician’s capabilities with data and intelligence that were previously unavailable or hard to access. What’s key here is that the interaction is natural and an integrated part of a physician’s regular workflow. That’s not only solving problems but opening avenues for improved diagnoses and treatments.

I’ll give you some examples. A radiologist today deals with hundreds, if not thousands, of cases over a brief time period. He or she must race to produce accurate reports – often in life-or-death scenarios. With AI, however, we can augment a radiologist’s expertise with our deep-learning algorithms, which can be trained to recognize, for example, a certain type of brain tumor, a vascular condition, or a case of pneumonia. Our AI then prioritizes such cases on the radiologist’s worklist ahead of the hundreds of other studies in the queue. Furthermore, radiologists spend much of their day reviewing patients’ scans to rule out medical issues; an algorithm can be trained to do that rapidly, allowing the radiologist to review and confirm results and to immediately address more severe cases within the workflow. Clearly, that increases productivity and accuracy for the radiologist. And since 70 percent of radiologists in the United States use Nuance diagnostic solutions, imaging AI algorithms can be developed, adopted, and rapidly deployed within the existing workflow of thousands of radiologists. As a result, these AI algorithms are both usable and used – at scale.
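To make the triage pattern concrete, here is a minimal, hypothetical sketch (not Nuance's actual implementation; the threshold, scores, and study IDs are all invented for illustration): a model scores each study for the likelihood of a critical finding, and flagged cases jump ahead of the routine first-come, first-served queue.

```python
# Hypothetical triage sketch: an AI model has scored each imaging study for
# the likelihood of a critical finding; flagged studies jump the FIFO queue.
from dataclasses import dataclass

@dataclass
class Study:
    study_id: str
    finding_probability: float  # model's score for a critical finding
    received_order: int         # original first-come, first-served position

CRITICAL_THRESHOLD = 0.85  # assumed operating point, tuned per deployment

def prioritize_worklist(studies):
    """Flagged studies first (highest score leading); the rest keep FIFO order."""
    flagged = sorted(
        (s for s in studies if s.finding_probability >= CRITICAL_THRESHOLD),
        key=lambda s: s.finding_probability, reverse=True)
    routine = sorted(
        (s for s in studies if s.finding_probability < CRITICAL_THRESHOLD),
        key=lambda s: s.received_order)
    return flagged + routine

queue = [
    Study("CT-001", 0.12, 0),
    Study("CT-002", 0.97, 1),  # e.g., suspected hemorrhage: read first
    Study("CT-003", 0.40, 2),
    Study("CT-004", 0.91, 3),
]
print([s.study_id for s in prioritize_worklist(queue)])
```

The design choice mirrors the interview: the algorithm does not replace the radiologist's read; it only reorders the worklist so the likeliest critical cases are seen first.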

These AI algorithms also improve patient experiences and outcomes. An example of that is in the diagnosis of lung nodules, or small masses of tissue, observed in a chest CT scan. An image characterization algorithm can assess a nodule’s size and location, making measurements and evaluations with comparisons to prior studies and reports to save time. It also analyzes the patient’s electronic health record (EHR) for prior history and relevant risk factors to make evidence-based clinical recommendations specifically for that patient. Researchers in Switzerland recently reported that an algorithm was more accurate in characterizing lung nodules compared to using clinical and demographic data alone. And an AI software developer reported that its algorithm could rule out malignancy in up to 20 percent of benign nodules. That eliminates the need for additional workups and procedures. It’s incredibly promising.

Now, imagine that AI-driven partnership in a physician’s day. Think of the number of times you’ve watched your physician turn away to type instead of interacting with you. AI lets physicians talk with you, dictate a solution, and receive corresponding facts and intelligence in the normal course of your exam. This happens in conversation with you, through a virtual assistant purpose-built for healthcare, sitting on the physician’s desk, or in any mobile setting of their choice. The virtual assistant can then do the following: quickly find prior test results, without the physician spending time looking for them; integrate medical history with current symptoms; add clinical knowledge; suggest treatment options; and call out possible complications or drug interactions. AI currently can augment physician intelligence, which makes physicians more productive and accurate. Most important, it removes the heavy administrative burden placed upon physicians and brings the personal aspect of care back to the physician-patient experience. These are simple but powerful examples of how real-time AI can have a profound impact on both the physician and patient, and it’s a scenario that’s happening today.

What do you use as your underlying repository of intelligence to make recommendations to doctors?

Satish Maripuri: To make effective recommendations to doctors, we need to keep a few factors in mind. First and foremost, usability is critical. What physicians don’t need is yet another piece of technology that intrudes in the patient care process. So, the first factor is to give the physician a virtual assistant that captures patient interactions during the exam. This is what we call ambient speech. This virtual assistant must be efficient and accurate enough to not only capture the conversation, but also augment it with relevant and timely clinical information.

The second factor is to understand and match the physician’s normal workflow. When the physician prescribes medication, narrates a diagnosis, or orders a follow-up, the virtual assistant must enter that into the system, reference elements of the patient’s history, look up drug interactions, and so forth. That sounds straightforward, but in healthcare, it’s tough to do: the question becomes how to inject pertinent knowledge into the workflow and place it at the physician’s fingertips.

Another factor is the ability to build and access a repository of clinical knowledge and intelligence, including structured and unstructured data from multiple sources, such as pulling information from EHRs and summarizing it for the clinician. At Nuance, we also leverage healthcare domain expertise to build out proprietary clinical strategies, as well as to synthesize healthcare information that is in the public domain.

Finally, keep in mind that, in healthcare, it is very difficult for a solution to be everything for everyone. There are countless use cases out there, so you need to narrow your focus, or you’ll sacrifice accuracy. Similar to the example I gave earlier about lung nodule diagnosis, brain scans can be very different from one another, and what matters is the ability of algorithms to provide intelligent augmentation and help set priorities in the context of that specific use case.

There’s a vision that drives us at Nuance Healthcare, and it’s a scenario I would like all of us to experience sooner rather than later. As a patient, I would rather explain, or narrate, why I’m visiting the doctor than fill out forms in the waiting room. With our technology, we can engage the patient well before they arrive at the physician’s office. The narrative can begin at home, where a patient can describe his or her illness. Based on that intelligence and natural language processing, an initial set of facts is gathered for the physician to review. We can extract and highlight a set of symptoms, potentially intersecting with a past record; alert the nurse or physician that the prior history has been integrated; and note any new details that emerge from the patient’s narrative. Then, during the office visit, the provider can look at a holistic patient record and communicate with the patient instead of staring at a computer screen. This real-time ambient capability understands the patient, the physician, and the nurse, and separates their respective voices. It captures that knowledge in the workflow and the documentation, prompting faster and more accurate diagnosis, with the documentation complete at the end of the visit. The progress we already have made in AI has given us the components we need to combine these factors and make our vision a reality – and this reality is not far away.

What do you think will drive more disruption and the use of AI?

Satish Maripuri: Disruption comes from bringing organizations together across industries with a unique combination of capabilities to drive change. In the case of AI, we look for partnerships inside and outside of healthcare that can drive innovation effectively and more quickly create value. Right now, we are focused on some unique AI-related partnerships that allow radiologists to be the technology trailblazers they always have been.

Radiologists began trailblazing technology with the introduction of picture archiving and communication systems (PACS) more than 20 years ago. Today, the latest advancements in radiology are highly receptive to the power of AI to improve productivity and accuracy while reducing the repetitive tasks that lead to burnout. But that power needs to be easily accessible and integrated into the radiologist’s normal workflow. To do so, we are partnering with NVIDIA, the American College of Radiology, Ohio State University, Lunit, RADLogics, Aidence, TeraRecon, and Partners HealthCare to innovate in the practice of radiology. Partnering with NVIDIA, for example, we aim to accelerate the pace of algorithm development and adoption with the AI Marketplace, a community of developers who build AI algorithms, run accurate analysis using data already available, and then make the algorithms available back in the AI Marketplace, which is much like the Apple App Store. The goal is to allow faster and more accurate analysis and diagnosis, making the AI Marketplace not only a game-changer for radiologists, but a potential life-changer for patients who depend on fast and accurate interpretation of their images. I believe that the creation and adoption of AI algorithms through the AI Marketplace will evolve as much as consumer and business apps have evolved for smartphones, and that the ubiquity of AI will force rapid investments into the AI inferencing layers of the runtime hosting stack. In addition to partnerships with NVIDIA to drive the development of these algorithms on graphics processing units (GPUs), we anticipate further innovative partnerships to accelerate the runtime AI inferencing engines that drive the computational needs of these algorithms. There is much to be done in this area, and we are just getting started.

Jeff Bezos has said that we are just dipping our toe in the water when it comes to AI. Where do you see AI in healthcare in 5 to 10 years, and what’s your utopia for healthcare and AI irrespective of Nuance’s commercial goals?

Satish Maripuri: I think Bezos absolutely is right, but the rate at which the “game” is going to change will accelerate. I think we will see the greatest impacts in the areas of productivity and quality of care, but keep in mind all of this is happening against the backdrop of changing healthcare reimbursement models. Models based on demonstrable outcomes drive the entire healthcare organization to administer care as productively and efficiently as possible, and they require accurate documentation. I believe the inevitable impact of AI is a much-needed, timely disruption. But there are other urgent reasons why we must hit the accelerator in this space: we must reduce the cost of administering care; we must address physician burnout rates, which are now running as high as 60 percent; and we must improve the patient experience. Accelerating the pace of innovation relative to the cost of care, physician burnout, and the patient experience may be a near-future form of utopia.

I also passionately believe that there are three other factors that will change the way medicine is practiced today. These include:

  • More technology and solutions will be placed in the hands of you and me as consumers/patients. This will drive more consumer engagement in self-diagnosis, online engagement, and digital health techniques – essentially tipping the balance towards the consumer/patient;
  • Virtual assistants driving ambient intelligence will be ubiquitous given the mobility of the care team; and
  • Clinical decision support and intelligence will only get better as more knowledge needs to be at the fingertips of the patient and the care team. This is what drives our healthcare strategy.

Tell me more about physician burnout and how AI can address this epidemic.

Satish Maripuri: The unrelenting race that physicians and caregivers run every day is why they are experiencing an ever-growing level of burnout – a level that is gaining steam. The hope that technology – and more specifically the EHR – would save the day and give back valuable time has, unfortunately, generated new complications of its own. As technology leans toward serving protocols and regulations, caregivers now serve the technology. As a result, physicians are reliving their day: revisiting patient records to finish EHR documentation, resolving coding queries, and reorganizing their personal lives to make it all happen. This is where we can help. In this world of restarts, errors can slip through, causing queries, denials, rework, and costs. More importantly, though, the right diagnosis may not get captured. I don’t mean an inaccurate diagnosis. I mean the right one. From our point of view, there’s a big difference between something that is merely accurate and something that is right. Just because a diagnosis isn’t wrong doesn’t make it the absolute right one either. The difference for the patient is obviously the treatment plan – but for the provider, it often means not getting reimbursed for the care delivered. The difference is so substantial that we’ve built our business around it.

We believe that every instance of patient information must be right the very first time it hits the EHR, and every instance afterward, because of the chain of care-related events that cascade from it. For the patient story to always be what we call “first time right,” the clinical documentation and decision support solutions relied upon must have a solid foundation, so physicians and care teams can trust them. They must be built by clinicians who understand care – not people who understand codes. They must be complete and provide choice to match any use case while being deeply embedded into the EHR systems – and match the way caregivers think, talk, and work. They must drive mobile effectiveness, letting physicians and care teams use their device of choice – anywhere and at any time. They must be supported 24/7, because technology must always serve the physician and their patients – and not the other way around. And they must be smart, surrounding physicians with intelligence at every turn. This includes AI-enabled technologies for speech capture, computer-assisted physician documentation (CAPD), and clinical documentation improvement (CDI), because technologies that provide real-time intelligence paired with decision support dramatically improve the patient story. This combination of AI-enabled technology embedded in the workflow with human at-the-elbow support is quite unique to Nuance – it’s what empowers physicians to stop reliving their days serving the EHR and instead better serve their patients and themselves.

Mark Cuban predicts that the first trillionaires will be those who master AI and use it in a way that no one else has thought of. When you look at the landscape, who do you think might win and why? I am not asking for the name of specific companies or persons, but where do you think extraordinary advances will occur?

Satish Maripuri: The evolution of personal computing, the Internet, mobility, IoT, the cloud, big data capabilities, robotics, machine learning, and now deep learning with the combination of AI and virtual assistants – all of these are the foundation that will enable the next generation of intelligence. There are two ways to think about where the disruption is going to come from. The first is incremental innovation, evidenced by the example I gave you in the radiology sector where an algorithm enhances the radiologist’s workflow and outcomes. With incremental innovation, you are inserting an intelligent piece of technology into the existing workflow to make it better.

The second is when one can imagine a new paradigm and reimagine the clinical workflows and application of the intelligence. It’s not just about doing something better, but instead reimagining the entire process and the outcomes. This is where real disruption is going to come. If I were to place a bet, I would bet that disruption will come from left field, where someone who is trying to innovate the next device creates something completely different, perhaps something that eliminates the first eight steps in a workflow that others are trying to improve.

Disruption is about using the capabilities of one breakthrough to lead to another and another. The “winners” will have the ability to be nimble, to develop new capabilities on top of existing capabilities, or to go into an adjacent strength – thinking about a problem in new and different ways.

Innovators are extraordinary and will inevitably grasp and even master AI; they also will make advances in technologies yet to be realized. The opportunities to innovate are limitless, and anyone’s to own. Rather than fight to stay relevant, we need to reimagine the way we can take advantage of the technology enablers and the way things are done today – essentially disrupt ourselves.

Google’s AlphaGo Can Shape The Future Of Healthcare

By | Corporations


The Medical Futurist is one of my favorite sites — it offers first-rate content at every turn.

Here, Bertalan Meskó, MD, PhD, TMF‘s self-proclaimed “geek physician”, describes how Google’s AlphaGo algorithm, which beat 18-time world champion Lee Sedol (9th dan) of Korea at the complicated game of Go, will have serious implications for medicine and healthcare.



Daily Dose of Healthcare AI (Thurs. 4/19)

By | Physicians' Practice / Job, Regulation, Venture Capital


Medical artificial intelligence firm BenevolentAI secures $115m in funding — The AI pharmaceutical startup has been valued at $2 billion. (ZDNet)

MIT Spinout, ReviveMed, Raises $1.5M to Advance its AI-Driven Metabolomic Platform for Drug Discovery (Business Wire)

No Doctor Needed: US Health Regulators Approve AI Device To Detect Eye Diseases (IFLSCIENCE)

Artificial intelligence will put a premium on physicians’ knowledge and decision-making skills (STAT)

2018: A Year of Regulation In Tech (Tech Native)  

Testing algorithms key to applying AI and machine learning in healthcare (HealthcareITNews)

New Group Promotes AI, Robotics, and Automation in Healthcare: Goal Is to Improve Patient Access to Quality Care (Genetic Engineering & Biotech News)

Separating the hype from reality in healthcare AI (HealthcareITNews)

Niti Aayog (India) pilots AI-based initiatives in agriculture, education and healthcare (Factor Daily)

Microsoft AI Helping us Accurately Predict Cardiac Diseases: Apollo (TECH News 18)

AI in practice – medical apps have their own health warnings (diginomica)

Ethical Technology Will Require A Grassroots Revolution

By | Law & Ethics

This WIRED article focuses on Tristan Harris, a former Design Ethicist at Google. The article provides a more interesting perspective on ethics and technology than most I’ve read, with a deep look at the relationship between technology and mankind.

According to Harris, the stimulation that technology such as the iPhone delivers through its apps “has become an existential threat to human beings” — language that closely parallels language used by Elon Musk. Email alone is literally addictive, stimulating the release of dopamine with each notification of received mail. Those neurological rewards (dopamine) kill neurons when overstimulated by video games or time spent on Facebook, according to Robert Lustig, a pediatric endocrinologist at UC San Francisco (UCSF).

Harris is calling on the companies themselves to redesign their products with ethics, not purely profits, in mind, and calling on Congress to write basic consumer protections into law.

He states:

We live in an environment, this digital city without even realizing it. That city is completely unregulated. It’s the Wild West. It’s like, build a casino wherever you want with flashing lights and flashing signs. Maximize developer access to do whatever they want to people. Shouldn’t there be some zoning laws?

It’s acutely apparent that those laws won’t just happen on their own. They require a groundswell of public pressure on both tech companies and politicians. If there was ever a time to apply such pressure, it’s this age of unprecedented activism. After all, if tech platforms are influencing the way people think about the world, the way they think about each other, and the way they think about themselves, then they’re also influencing the way we talk about women’s rights, the climate, and immigration (and how we vote, a timely example). (parenthesis added)

We see another human-tech relationship in the domain of AI. The ethical, legal, and regulatory dimensions of AI are, I believe, the most important we must confront in order to unleash AI’s beneficent potential while simultaneously protecting ourselves from possible outcomes such as The Singularity. Consider the notion of a tipping point at which machines outsmart humans – i.e., they would pass the Turing Test, which at that moment would instantly become obsolete – and then think of machines (not just cars) having autonomy that humans cannot control. Already, AI agents at both Google and Facebook have gone rogue, developing their own languages for machine-to-machine communication that some of the smartest people in Silicon Valley could not decipher. Each company pulled the plug on the rogues. What happens when rogue communication – we’ve just seen the tip of that iceberg – occurs between companies? Or between a company and a nation? That’s worth a double take.


Quote of the Day: Elon Musk’s Tenacity

By | Miscellaneous


We often come across news about someone who may not directly impact healthcare AI. These may include business leaders, politicians, academics, and lots of dreamers (in the apolitical sense), so we’ll include tidbits here and there for fun. Different viewpoints stretch the mind. Here’s one of the icons of the post-Steve Jobs era, Elon Musk, CEO of Tesla (lots of AI, see above) and founder of SpaceX and The Boring Company.

Steve Hanley, GAS2

When Tesla CEO Elon Musk gets an idea into his head, he is like a dog with a bone. “Single minded” does not begin to describe his determination to get what he wants.


Or as Dr. Larry Kerschberg says: “When Elon has an idea, he starts a company.”

Is AI Riding a One-Trick Pony? Deep Learning & Backpropagation

By | Deep Learning


This is a great piece by James Somers in MIT Technology Review about the germination, 30 years ago, of an idea — backpropagation — that, in the words of Princeton computational psychologist Jon Cohen, is “what all of deep learning is based on — literally everything.” The idea of so-called “backprop” was set forth in a 1986 paper by Geoffrey Hinton and others. Hinton, who now works on the Google Brain AI team, is considered the father of deep learning.

Author James Somers challenges us from the start:  

When you boil it down, AI today is deep learning, and deep learning is backprop—which is amazing, considering that backprop is more than 30 years old. It’s worth understanding how that happened—how a technique could lie in wait for so long and then cause such an explosion—because once you understand the story of backprop, you’ll start to understand the current moment in AI, and in particular the fact that maybe we’re not actually at the beginning of a revolution. Maybe we’re at the end of one. 

A bit of history:

In the 1980s Hinton was, as he is now, an expert on neural networks, a much-simplified model of the network of neurons and synapses in our brains. However, at that time it had been firmly decided that neural networks were a dead end in AI research. Although the earliest neural net, the Perceptron, which began to be developed in the 1950s, had been hailed as a first step toward human-level machine intelligence, a 1969 book by MIT’s Marvin Minsky and Seymour Papert, called Perceptrons, proved mathematically that such networks could perform only the most basic functions. These networks had just two layers of neurons, an input layer and an output layer. Nets with more layers between the input and output neurons could in theory solve a great variety of problems, but nobody knew how to train them, and so in practice they were useless. Except for a few holdouts like Hinton, Perceptrons caused most people to give up on neural nets entirely.

Hinton’s breakthrough, in 1986, was to show that backpropagation could train a deep neural net, meaning one with more than two or three layers. But it took another 26 years before increasing computational power made good on the discovery. A 2012 paper by Hinton and two of his Toronto students showed that deep neural nets, trained using backpropagation, beat state-of-the-art systems in image recognition. “Deep learning” took off. To the outside world, AI seemed to wake up overnight. For Hinton, it was a payoff long overdue.
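For readers curious about what backprop actually does, here is a from-scratch toy sketch (illustrative only; real deep-learning systems use optimized libraries and far larger networks): a tiny one-hidden-layer network learns XOR, the kind of problem a two-layer Perceptron cannot solve, by propagating the output error backward through the chain rule to update every weight.

```python
# Toy backpropagation: train a 2-3-1 sigmoid network on XOR from scratch.
import math
import random

random.seed(0)
H = 3          # hidden units
LR = 0.5       # learning rate
DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# small random weights: input->hidden and hidden->output, plus biases
w_ih = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b_h = [0.0] * H
w_ho = [random.uniform(-1, 1) for _ in range(H)]
b_o = 0.0

def forward(x):
    h = [sigmoid(w_ih[j][0] * x[0] + w_ih[j][1] * x[1] + b_h[j]) for j in range(H)]
    o = sigmoid(sum(w_ho[j] * h[j] for j in range(H)) + b_o)
    return h, o

def train_epoch():
    global b_o
    loss = 0.0
    for x, t in DATA:
        h, o = forward(x)
        loss += (o - t) ** 2
        # backward pass: chain rule from the output error to every weight
        d_o = (o - t) * o * (1 - o)                                  # error at the output
        d_h = [d_o * w_ho[j] * h[j] * (1 - h[j]) for j in range(H)]  # error pushed back
        for j in range(H):
            w_ho[j] -= LR * d_o * h[j]
            w_ih[j][0] -= LR * d_h[j] * x[0]
            w_ih[j][1] -= LR * d_h[j] * x[1]
            b_h[j] -= LR * d_h[j]
        b_o -= LR * d_o
    return loss

losses = [train_epoch() for _ in range(5000)]
print(f"squared error: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The point mirrors the article: nothing exotic is required, just the chain rule applied layer by layer, which is exactly why the technique could "lie in wait" until computational power caught up.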
