Have you ever wondered how Netflix knows what movies to recommend for you—or realized that if it is suggesting kids’ movies, that means someone else in the house must have been using your account? It turns out that this sort of technology, which relies on what is called “big data,” is also useful for calltaking and emergency response.
Consider the size of the data that we collectively work with. Based on some available information, if there were global coverage of emergency numbers, there would be approximately seven billion emergency calls (911, 999, 998, 997, 112, 111, 102, etc.) per year worldwide. That is around one call per person on average, although some studies have found that repeat callers account for a disproportionate share of the calls, with those who do call averaging more than two calls per year. In any case, this generates huge amounts of data, more than enough to qualify as “big data.”
Not unlike that Google search, imagine the day in the not too distant future when you’ve begun to type the caller’s problem statement, and the ProQA software knows what the caller’s answers are likely to be and autocompletes them before you have even asked all the questions. I predict that day is coming soon, although this kind of system requires large amounts of data to work from. How can that be possible? Let’s take a brief look at how these systems work and how they might improve the accuracy and speed of your calltaking process.
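To make the idea concrete, here is a minimal sketch of how such a predictor might work at its simplest. The call records, problem statements, and answers below are hypothetical, and a real system would learn from millions of de-identified calls with far richer models; this toy version just counts which answer has historically been most common for a given problem statement.

```python
from collections import Counter, defaultdict

# Hypothetical historical records: (problem statement, answer to a key question).
# These examples are invented for illustration only.
history = [
    ("chest pain", "yes"),
    ("chest pain", "yes"),
    ("chest pain", "no"),
    ("fall from ladder", "no"),
    ("fall from ladder", "no"),
]

# Count how often each answer has followed each problem statement.
answer_counts = defaultdict(Counter)
for problem, answer in history:
    answer_counts[problem][answer] += 1

def predict_answer(problem):
    """Suggest the most frequent historical answer for this problem statement."""
    counts = answer_counts.get(problem)
    if not counts:
        return None  # no history yet: the calltaker asks the question as usual
    return counts.most_common(1)[0][0]

print(predict_answer("chest pain"))        # → "yes" (2 of 3 historical calls)
print(predict_answer("fall from ladder"))  # → "no"
```

Even this crude frequency count hints at the value of scale: the more calls the system has seen, the better its suggestions become, which is exactly why big data matters here.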
In 1950, Alan Turing, a famous computer scientist, wondered, “Can computers think?” He proposed answering this question with the so-called Imitation Game. In this game, an interrogator would ask two players (one a human and the other a computer) a series of questions designed to determine which of the two players was the computer and which the human. If the interrogator reached the point where they could not tell the difference between the two players’ answers, it would be determined that computers can, in fact, think.
Now, imagine a time in our near future when the computer’s flawless memory and calculations are available to the very real and human calltaker. Using Machine Learning (ML) tools, a branch of Artificial Intelligence (AI), these computers will recognize patterns that you did not even know existed—and do it almost instantly, starting with the phone ring. Complicated patterns and connections could be discovered that would be too difficult for any human to identify unaided.
Although we recognize patterns all the time, as humans we can only handle a limited number of variables, while ML/AI tools can very quickly identify patterns across 15, 50, 100, or even thousands of different inputs and determine precisely what is important and what is not. If you were to draw out these concepts visually, the result would look like a map of our nervous system, or perhaps like the pattern of a snowflake under a microscope. The purpose of using such pattern recognition, remember, is not to replace the emergency dispatcher—far from it. Rather, the process can recognize patterns across millions of calls, providing insight to the calltaker as they proceed through a call, similar to how doctors’ electronic decision support systems can offer them diagnostic and prescribing advice in real time.
One of the exciting aspects of this technology is that it uses what is called a self-learning algorithm; that is, it gets more accurate as time goes on and it is presented with more data. This kind of algorithm can learn from all dispatchers using the same system anywhere in the world simultaneously, and will learn which aspects of the calltaking and key questions might be regional or cultural, or even which ones change depending on the type of emergency. The questions that you need to ask and have answered might even change depending on the calltaker and the caller, customized for each scenario.
Another ML/AI method is known as Natural Language Processing (NLP), which aims to “understand” the meaning or intent of simple statements. When trained by humans, this method works well on semi-structured text such as a physician’s or paramedic’s narrative in a medical record. It is less effective, though, at handling information that is less structured, such as the dispatch problem description, which includes all forms of language, as well as word choices with multiple meanings.
Think about all the different responses you get when you ask “Okay, tell me exactly what happened?” Then consider how differently each calltaker might enter these responses into ProQA. As standard as data processing may seem, it is a process that works better when there is only one thing or one kind of thing happening. As you know, that often is not the case at dispatch.
For this kind of complex question, we might use a process known as clustering, which tries to identify the “chief complaint” or “call type,” so that everything that follows can be standardized.
For example, if the caller reports, “There was a shooting,” that will result in a rapid police response. If, however, the caller reports, “The victim is pulseless and not breathing,” you will instruct them to begin CPR and urgently send the paramedics. This system is designed to minimize the time required to determine what level, or priority, of dispatch is needed and what instructions should be given to reduce morbidity and mortality. However, at least for now, it does not tolerate ambiguity or changing circumstances very well.
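A rough sketch may help show what clustering means here. The statements and the similarity measure below are illustrative assumptions, not how any production dispatch system actually works: this toy version groups caller statements by simple word overlap, so that statements describing the same kind of emergency fall into the same cluster.

```python
def tokens(text):
    """Reduce a caller statement to a set of lowercase words."""
    return set(text.lower().replace(",", "").split())

def jaccard(a, b):
    """Word-overlap similarity between two token sets (0.0 to 1.0)."""
    return len(a & b) / len(a | b)

def cluster(statements, threshold=0.3):
    """Greedy one-pass clustering: join the most similar existing cluster,
    or start a new one if nothing is similar enough."""
    clusters = []  # each cluster is a list of token sets
    labels = []
    for text in statements:
        t = tokens(text)
        best, best_sim = None, 0.0
        for i, members in enumerate(clusters):
            sim = max(jaccard(t, m) for m in members)
            if sim > best_sim:
                best, best_sim = i, sim
        if best is not None and best_sim >= threshold:
            clusters[best].append(t)
            labels.append(best)
        else:
            clusters.append([t])
            labels.append(len(clusters) - 1)
    return labels

calls = [
    "there was a shooting",
    "someone got shot there was a shooting",
    "the victim is pulseless and not breathing",
    "he is not breathing and has no pulse",
]
print(cluster(calls))  # → [0, 0, 1, 1]: two shooting reports, two cardiac arrests
```

The two shooting reports land in one cluster and the two cardiac-arrest reports in another, which is the standardization step that lets everything downstream—priority level, instructions, response—be handled consistently.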
That’s why, in my future work, I plan to evaluate several different algorithms that could be useful (behind the scenes) in helping you determine, more quickly and more accurately, what responses are needed and which instructions should be provided.
Citation: Nudell, N. Artificial Intelligence for us? Annals of Emergency Dispatch & Response. 2017;5(1):5.