AI that detects cardiac arrests during emergency calls will be tested across Europe this summer

The software listens in to calls and helps emergency dispatchers make judgments

Illustration by James Bareham / The Verge

A startup that uses artificial intelligence to help emergency dispatchers identify signs of cardiac arrest over the phone will begin testing its software across Europe this summer.

Danish firm Corti says its algorithms can recognize out-of-hospital cardiac arrests (those that occur at home or in public) more quickly and accurately than humans. The software has already been deployed in Copenhagen, but this year, it will start four new pilots in as-yet-unnamed European cities in partnership with the European Emergency Number Association (EENA).

Quick recognition of cardiac arrests is vital, as every minute that passes without treatment reduces an individual’s chances of survival by 7 to 10 percent. Corti’s software works by listening in during emergency calls and looking out for a number of “verbal and non-verbal patterns of communication.” These include cues like a caller’s tone of voice and whether or not the subject is breathing.
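
To make that time pressure concrete, here is a minimal sketch of the 7 to 10 percent figure. The article does not say whether the drop is relative or in absolute percentage points, so the sketch assumes a compounding relative decline from a normalized baseline, and plugs in the average recognition times reported later in the piece (48 seconds for the software, 79 for human dispatchers).

```python
# Back-of-the-envelope look at the cited 7-10 percent per-minute drop in
# survival odds. Assumes a compounding relative decline (a modeling choice,
# not stated in the article) from a normalized baseline.

def surviving_fraction(minutes: float, drop_per_minute: float) -> float:
    """Fraction of the baseline survival odds left after a delay."""
    return (1.0 - drop_per_minute) ** minutes

for drop in (0.07, 0.10):
    # Compare the average recognition times reported later in the article.
    for label, seconds in (("AI, 48 s", 48), ("human, 79 s", 79)):
        left = surviving_fraction(seconds / 60, drop)
        print(f"{drop:.0%} per minute, {label}: {left:.1%} of baseline odds remain")
```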

Corti’s software acts like a personal assistant for the dispatcher during the call, prompting them to ask the caller certain questions and then making a recommendation as to whether or not it thinks the individual is suffering a cardiac arrest. The dispatcher can then summon an ambulance or give instructions for administering CPR.
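
Corti has not published implementation details, but the decision-support loop described above (listen, prompt a question, recommend, leave the decision to the dispatcher) can be sketched in outline. Everything below is hypothetical: the class, cue list, threshold, and keyword scoring are invented for illustration, and the real system reportedly works on audio patterns rather than keyword matching.

```python
# Hypothetical sketch of a dispatcher-assistant loop: score the call so far,
# suggest a follow-up question, and leave the final call to the dispatcher.
# Names, cues, and thresholds are invented; this is not Corti's API.

from dataclasses import dataclass, field

@dataclass
class CallState:
    transcript: list[str] = field(default_factory=list)
    arrest_probability: float = 0.0

def score_call(state: CallState) -> float:
    """Stand-in scorer: a real model would use audio cues (tone, breathing)."""
    text = " ".join(state.transcript).lower()
    cues = ("not breathing", "unconscious", "no pulse")
    return min(1.0, 0.3 * sum(cue in text for cue in cues))

def advise_dispatcher(state: CallState, threshold: float = 0.6) -> str:
    state.arrest_probability = score_call(state)
    if state.arrest_probability >= threshold:
        return "Likely cardiac arrest: dispatch ambulance, guide caller through CPR."
    return "Ask: 'Is the patient breathing normally?' and keep listening."

state = CallState(transcript=["He just collapsed suddenly"])
print(advise_dispatcher(state))   # prompts a follow-up question
state.transcript.append("No, he is not breathing and he's unconscious")
print(advise_dispatcher(state))   # now recommends dispatch and CPR guidance
```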

(A quick aside: although the two terms are often used interchangeably, a heart attack is not the same as a cardiac arrest. A cardiac arrest is an electrical fault that causes the heart to stop beating, while a heart attack happens when a blocked artery restricts the flow of blood to the heart.)

Corti’s software has performed impressively in the company’s own tests. In one study on a database of 161,650 historical emergency calls, the startup’s software identified 93.1 percent of out-of-hospital cardiac arrests (or OHCAs) compared to 72.9 percent recognized by the actual human dispatchers. It was also quicker, spotting signs of a cardiac arrest in 48 seconds on average, compared to 79 seconds for humans. Similar figures were reported during ongoing tests in Copenhagen.

Despite this success, there are still some unanswered questions about the software and its integration into emergency medical services. For example, Corti has yet to publish its study of the 161,650 calls in full, meaning that certain key figures, like the software’s false positive rate (how often it incorrectly flags a cardiac arrest), are still unknown. Speaking to The Verge, Corti’s CTO Lars Maaløe said these statistics were being processed, but that the rate was “comparable to that of humans.”
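
For reference, the published 93.1 and 72.9 percent figures are detection rates (sensitivity), while the unpublished number is the false positive rate. The sketch below shows the standard definitions; the counts in it are made up purely to exercise the formulas and are not from Corti’s study.

```python
# Standard detection metrics. The counts here are invented; Corti has not
# published the confusion-matrix figures needed to compute its real FPR.

def sensitivity(true_pos: int, false_neg: int) -> float:
    """Share of actual cardiac arrests that were flagged (detection rate)."""
    return true_pos / (true_pos + false_neg)

def false_positive_rate(false_pos: int, true_neg: int) -> float:
    """Share of non-arrest calls that were incorrectly flagged."""
    return false_pos / (false_pos + true_neg)

# Hypothetical counts, not from Corti's study:
print(f"sensitivity: {sensitivity(true_pos=931, false_neg=69):.1%}")              # 93.1%
print(f"false positive rate: {false_positive_rate(false_pos=50, true_neg=950):.1%}")  # 5.0%
```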

Corti seems to outperform humans, but some questions remain

It might also worry medical professionals that Corti’s software cannot explain how it makes its decisions. Like a lot of machine learning software, Corti’s algorithms learn by combing through vast datasets, looking for patterns that match certain outcomes (in this case, whether or not someone is having a cardiac arrest). But explaining which patterns it spots and how it weights them is not part of the software’s design. Maaløe tells The Verge that Corti’s team knows that certain words “have a higher impact on the final output than others,” but he says this analysis is necessarily “imprecise.”
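
One generic way to get at the kind of rough word-level attribution Maaløe describes (not Corti’s method, just an illustration of why such analysis stays imprecise) is occlusion: drop each word in turn and measure how much a black-box score moves.

```python
# Occlusion-style attribution: remove one word at a time and measure how a
# black-box score changes. The toy scorer below stands in for an opaque model.

from typing import Callable

def word_importance(words: list[str],
                    score: Callable[[list[str]], float]) -> dict[str, float]:
    baseline = score(words)
    return {w: round(baseline - score(words[:i] + words[i + 1:]), 3)
            for i, w in enumerate(words)}

def toy_score(words: list[str]) -> float:
    # Invented weights; a real model's internals are not visible like this.
    weights = {"breathing": 0.4, "unconscious": 0.3, "collapsed": 0.2}
    return sum(weights.get(w, 0.0) for w in words)

print(word_importance(["he", "collapsed", "not", "breathing"], toy_score))
# "breathing" gets the largest attribution, while "not" gets none even though
# it flips the meaning: exactly the kind of nuance such an analysis can miss.
```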

Corti is confident that its software makes the right decisions, and its tests seem to bear that judgment out. But it’s possible that the AI will miss certain nuances or make bad judgments when faced with unfamiliar situations. Maaløe gives the example of someone calling to report that a loved one is suffering a cardiac arrest. He says that in these cases, callers tend to be more confident that the person is breathing because they want it to be true. Can AI pick up on these sorts of human subtleties? It’s for that reason that Corti’s software doesn’t make the decision itself; it only offers guidance to a trained dispatcher.

These problems aren’t specific to Corti, though. They’re a challenge for the whole health care community. As diagnostic AI takes on a bigger role, medical practitioners and the general public will have to decide what they think is a reasonable level of transparency to demand from such algorithms. To what degree will we be willing to just trust a machine?

In the meantime, Corti is going to be listening in to more emergency calls around Europe this year. It’ll be learning — and hopefully saving lives, too.