One of the most important elements of any scientific field is the use of clear, commonly accepted definitions that can be used to measure processes and create meaningful change. Well-defined metrics allow us to progress with confidence by building on previous work carried out by others. Previous authors1 have already identified that research in the dispatch center should focus specifically on the development of performance measures for use in performance management, audit, and research. The question then becomes: which measures should we focus upon, how do we define them, and what value do they hold?

Before agreeing on definitions or gathering data from any field of dispatch, however, it is vital to remember that, however accurately data are gathered against well-defined and commonly accepted goals, the behavior of the system user can still influence the data recorded. This is even more relevant in the nurse triage environment, where nurses often have the autonomy to override clinical system decisions when they feel that the recommended care needs adjusting based on their clinical experience. Belman et al documented agreement regarding disposition decisions among call center nurses, and between nurses and protocols, to be close to 80%.2 Two factors contributed to disagreement with protocol dispositions: nurses either did not follow protocols or did not act on information provided by the caller. These data suggest a need for additional attention to communication skills and protocol adherence in training and ongoing quality improvement practices.

Quality assurance (QA) is a key component of the overall emergency telecommunication quality improvement process: it measures, evaluates, and quantifies performance. Quality improvement (QI) then takes the findings of the QA process and develops strategies and training to improve overall performance, in an attempt to reduce subjective variance between users. In a well-governed agency, user behavior is tightly controlled, and objectivity and authority are maintained through this robust QA/QI process.
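The agreement figure cited above can be understood as simple percent agreement between paired dispositions. The sketch below illustrates the calculation on invented example data; the disposition labels and call records are hypothetical and are not drawn from the Belman et al study.

```python
# Illustrative only: simple percent agreement between nurse dispositions
# and protocol-recommended dispositions (hypothetical data).
def percent_agreement(nurse, protocol):
    """Share of calls where the nurse's disposition matches the protocol's."""
    if len(nurse) != len(protocol):
        raise ValueError("paired lists must be the same length")
    matches = sum(n == p for n, p in zip(nurse, protocol))
    return matches / len(nurse)

# Invented example: 4 of 5 calls agree.
nurse_calls    = ["ED", "self care", "urgent", "ED", "self care"]
protocol_calls = ["ED", "urgent",    "urgent", "ED", "self care"]
print(percent_agreement(nurse_calls, protocol_calls))  # → 0.8
```

A robust QA process would track this figure over time and across users, flagging individual nurses or protocol sections whose agreement falls below an agreed threshold.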

In many parts of the world (for example, the United Kingdom, Australia, and parts of the United States), clinical nurse triage is an integral part of the emergency medical dispatch process. Numerous systems, based on various philosophies, are available to aid the nurse in this role. There is even variation between agencies using the same decision support system: some agencies opt to triage only patients within a certain age range, or only patients with certain symptoms, thereby potentially reducing clinical risk and achieving better outcomes. Variation also arises in the staffing of the decision support system, with some agencies exclusively using nurses and others using both paramedics and nurses.3 Our argument here is that there should be common definitions and comparable processes to measure, regardless of the system, the philosophy, the agency, or where it resides. One important consideration in each telephonic consultation is to evaluate whether telephone-based nurse triage is appropriate at all.4 But before we can even establish this, we need to know what data is available, how it has been derived, and what process generated it. This in itself can be a cumbersome task fraught with difficulty.

The providers of clinical decision support software also need to take some responsibility. The data their systems produce needs to be easily accessible, in a format that can be integrated with and compared to other data, and presented in a manner that is easy to interpret and understand. In terms of time and resources, data that is considered ‘difficult to access’ can be a real hindrance to any research project. Equally, the data need to be produced in a consistent manner from user to user. One benefit of a standard clinical protocol (versus a guideline) is that it is designed to draw the user through a predictable, accurate, repeatable, verifiable process, which means that the data it produces are consistent and reliable.

Through predictable use (compliance to protocol) and multiple-site replication, cause-and-effect relationships can be validated. The ability to engage in multiple-site studies, a practice essential to strong study design, is only possible through the use of a unified protocol: one that is exactly the same in every location where it is used. Perhaps some onus should be placed upon the system provider to enforce a methodology for the use of their product. Combined with a robust QA process to ensure compliance with that methodology, this may be the best way to produce comparable data and improve system performance.

Once we are sure the data can be accessed and understood, it can be defined. Creating universal definitions can be a time-consuming project. The seemingly simple definition of “self care,” for example, will vary from agency to agency, and reaching agreement may require some skillful arbitration. Some organizations follow a purist view and define a “self care” (also known as “home care”) disposition as one where the patient was given instructions on how to care for themselves at home, without the need to access other resources in the community (for instance, the middle-aged non-smoker with a new-onset dry cough following an upper respiratory tract infection and no comorbid conditions). Other agencies may include other resources (for example, a pharmacist visit) and other non-urgent primary care dispositions (for example, a non-urgent visit to the patient's primary physician) under the definition of “self care.” The lack of a universal definition in this example clearly demonstrates how varying interpretations of disposition definitions can influence performance reporting.

Any study of the appropriateness of the nurse-assigned level of care is problematic, because differences in opinion regarding the best way of handling various medical problems result in differing advice for the same problem.5 In-depth review of clinical records by a multidisciplinary expert panel is a pragmatic means of assessing safety in a context in which the diagnosis is often unclear and objective outcome measures are lacking.6 The data-collection process is often further complicated by the difficulty of accessing the clinical records of patients at the various healthcare facilities where they were ultimately assessed and treated. The future direction of emergency communication nurse telephone triage in the emergency medical dispatch environment will depend on detailed, outcome-based studies of clinical nurse triage protocols and the measurement of clinical outcomes against system recommendations.

References

  1. Snooks H, Evans A, Wells B, Peconi J, Thomas M, Woollard M, Guly H, Jenkinson E, Turner J, Hartley-Sharpe C. What are the highest priorities for research in emergency prehospital care? Emergency Medicine Journal. 2009;26:549–550.
  2. Belman S, Murphy J, Steiner J, Kempe A. Consistency of triage decisions by call center nurses. Ambulatory Pediatrics. 2002;2:396–400.
  3. Dale J, Williams S, Foster T, Higgins J, Snooks H, Crouch R, Hartley-Sharpe C, Glucksman E, George S. Safety of telephone consultation for “non-serious” emergency ambulance service patients. Qual Saf Health Care. 2004;13:363–373.
  4. Car J, Sheikh A. Telephone consultations. British Medical Journal. 2003;326:966–969.
  5. Marklund B, Ström M, Månsson J, Borquist L, Baigi A, Fridlund B. Computer-supported telephone nurse triage: an evaluation of medical quality and costs. Journal of Nursing Management. 2007;15:180–187.
  6. Dale J, Williams S, Foster T, Higgins J, Snooks H, Crouch R, Hartley-Sharpe C, Glucksman E, George S. Safety of telephone consultation for “non-serious” emergency ambulance service patients. Qual Saf Health Care. 2004;13:363–373.