Dissemination, implementation, and de-implementation: the trauma perspective
  1. Vanessa P Ho1,
  2. Rochelle A Dicker2,
  3. Elliott R Haut3,4,
  4. Coalition for National Trauma Research Scientific Advisory Committee
    1. Departments of Surgery and Population and Quantitative Health Sciences, MetroHealth Medical Center, Cleveland, Ohio, USA
    2. Department of Surgery, David Geffen School of Medicine, Los Angeles, California, USA
    3. Departments of Surgery, Anesthesiology and Critical Care Medicine, and Emergency Medicine, Johns Hopkins Medicine, Baltimore, Maryland, USA
    4. Department of Health Policy and Management, Johns Hopkins University Bloomberg School of Public Health, Baltimore, Maryland, USA

    Correspondence to Dr Elliott R Haut, Departments of Surgery, Anesthesiology & Critical Care Medicine, and Emergency Medicine, Johns Hopkins Medicine, Baltimore, MD 21205, USA; ehaut1{at}jhmi.edu

    Introduction

    Trauma surgery moves fast. Clinical decisions and treatment of injured patients must occur expeditiously, or patients suffer. Trauma research also moves fast, and new high-quality studies about the treatment of injured patients frequently reshape the field and our understanding of best practices. Historically, medicine has relied on the publication of manuscripts and the endorsement of trusted physicians to disseminate best practices and change care. However, implementation of research has proven to be slow. When research does not reach the bedside, patients are not offered proven therapies or are treated with dated or ineffective therapies. Implementation science, the rigorous study of the timely uptake of evidence into routine practice, is the next vital frontier in surgery,1 with the potential to have a profound positive effect on the care provided to our patients.

    The purpose of this paper is to describe the principles of implementation science and propose their wider use in trauma care. This paper is published as an initiative of the Coalition for National Trauma Research (CNTR) to further advance high-quality research and promote sustainable research funding to improve the care of injured patients, commensurate with the burden of disease in the USA. We will review definitions of implementation, dissemination, and de-implementation, as well as research frameworks, study design, and funding opportunities.

    Implementation science is an umbrella term that includes implementation research, dissemination research, and de-implementation research. The key with implementation science is focusing on “how to do it” rather than “what to do.” As a result, the outcomes of interest are not those typically considered in outcomes research such as mortality or morbidity. To study implementation, we assume that the “best practice” treatment is already known. Implementation science focuses on how to obtain sustained use of the best practice treatment in real-world settings. Implementation research is the study and use of strategies to adopt and integrate evidence-based health interventions into clinical and community settings in order to improve patient outcomes and benefit population health. Dissemination research is the study of targeted distribution of information and intervention materials to a specific public health or clinical practice audience, with the intent to understand how to best spread and sustain knowledge. De-implementation is the study of systematic processes to remove unnecessary or low-value care. Implementation science is similar in some ways to quality improvement, which is a process well known to the trauma community. However, quality improvement is typically performed more locally within a single hospital or healthcare organization to improve healthcare services to patients, whereas implementation science aims to develop new knowledge that will be more generalizable. These overlapping principles help to bridge the gap between research and the patient experience of healthcare.

    Traumatic injury is a common disease. It is the leading cause of death up to age 44 years, and survivors may also suffer severe disability. Because of the widespread and urgent nature of trauma, injured patients are treated at nearly every hospital across the country and around the world. This ubiquity raises significant concern about variation in care for injured patients nationwide. Although some aspects of care are standardized and routine, others are likely subject to major variations. The initial approach to the care of the injured via the American College of Surgeons Advanced Trauma Life Support course is well known and likely followed frequently.2 However, in other aspects of care, patients are unlikely to receive all appropriate interventions. Trauma systems have developed to ensure that hospitals designated as trauma centers have the appropriate resources and personnel to care for complex patients as well as a robust quality improvement program. However, there are many hospitals that are not part of the trauma system, and trauma center designation alone does not ensure the rapid uptake of best practices. Implementation science can help identify the lapses in care and help promote and promulgate best practice to reach a larger group of patients.

    What is the gap?

    Research, even when convincing and ground-breaking, is only as good as its adoption into routine clinical practice. Current estimates are that it takes, on average, 17 years for research findings to become standard clinical practice.3–5 There are multiple excellent examples of “best practices” in surgery, but implementation of these practices has been less studied. In fact, studying best practices in real-world environments can reveal flaws in the recommended practices themselves; alternatively, it may show that it is the implementation, rather than the practice, that must be improved before outcomes change.

    In many efficacy studies, the treatment is assumed to be well implemented, and data on clinical outcomes are assumed to reflect the true treatment effect. In this scenario, if a treatment is poorly implemented, researchers may wrongly conclude that the treatment itself is ineffective. The challenges and barriers to adoption of practice can be exemplified by examining the evolution of WHO’s Surgical Safety Checklist (WHO-SSC).6 Hull et al described this as a case study for implementation, to show why efforts to implement evidence-based interventions often fail to replicate the pattern of improvements described in the initial study.7 It should come as no surprise that efficacy shown in a controlled environment (ie, a randomized clinical trial (RCT)) does not always translate directly to real-world effectiveness.

    In 2009, Haynes et al published the WHO-SSC study, which described reductions in mortality and morbidity following the introduction of a 19-item surgical checklist across eight countries.6 The WHO-SSC was rapidly and widely implemented, but its efficacy has been highly debated. Multiple studies within a range of settings have implemented the WHO-SSC and corroborated the initial findings of reduced mortality, complications, and hospital length of stay, and improved teamwork and adherence to safety processes.6 8 9 However, these findings have not been universally replicated, revealing a debate as to whether the practice itself is flawed or whether limitations are embedded within the implementation of the practice.10 11 Implementation is not a simple process, even for a seemingly “simple” intervention with dramatic reported improvements in outcomes. One study of implementation of the WHO-SSC showed only 31% compliance with the initial “Sign in,” and 48% compliance with “time out” procedures after a hospital started using the checklist.12 Even now, 10 years after the initial publication of the study, debate remains about the benefits of the WHO-SSC, embodied by a recent “Head to Head” publication in the British Medical Journal debating whether the WHO Surgical Safety Checklist was “hyped.”13 While it is paramount to the scientific process to continually question dogma, implementation science studies must be performed to assess the contribution of implementation barriers to efficacy studies that seek to bring practices to imperfect environments. Without more knowledge about implementation, it will be impossible to determine whether best practices are truly effective and how to bring widespread adoption of those practices to the front line.

    Surgeons, particularly trauma surgeons, are conditioned to benchmark against quality indicators and follow protocols. Therefore, methods that create evidence-based programs often gain traction. These take many forms, including quality programs, practice management guidelines, and verification programs. Examples include the American College of Surgeons’ Strong for Surgery14; practice management guidelines such as those from the Eastern Association for the Surgery of Trauma (EAST)15; as well as the trauma community’s key doctrine document, the Optimal Resources for Trauma Care.16 These resource documents tell us what to do, but not how to do it. As a result, hospitals and trauma centers are often left to “reinvent the wheel” and problem-solve in silos to meet these “best practices.” Although many hospitals use passive methods such as education in an effort to spread best practices, this approach may be flawed and has not been shown to be effective.17 A concerted effort from researchers to study effective implementation will identify those practices that are generalizable among communities and help to bridge the gap between academic knowledge and patient care.

    Outcomes of interest in implementation science

    For clinicians and researchers to effectively study “how” instead of “what,” we need to describe the outcomes of interest to be used as metrics for success. Proctor et al created one widely used framework for this, identifying three possible outcome categories that are separate but interrelated: implementation outcomes, service outcomes, and client/health outcomes.18 19 Client/health outcomes describe the typical study outcomes of health research, including health status and symptoms, satisfaction, and function of an individual. Service outcomes assess the effect of the intervention on a health system in terms of efficiency, safety, effectiveness, equity, and patient-centeredness.

    Implementation outcomes are outlined in table 1, and this taxonomy includes acceptability, adoption, appropriateness, costs, feasibility, fidelity, penetration, and sustainability. These outcomes were developed to achieve consistency with existing literature and definitions but allow researchers to separate concepts to better describe barriers and successes in different phases of implementation. They address implementation from a variety of perspectives, including provider, consumer, organization/institution/setting, and administrator. This work must be multidisciplinary to truly understand the impact via different lenses. How an outcome is measured may also depend on the phase of implementation and on the outcome of interest. For example, acceptability may be measured at the provider or consumer level with surveys or qualitative interviews, adoption may be measured at the provider or organization level with administrative data or survey data, and feasibility may be measured at the organization level using case audits. While these types of studies have not been very common in the trauma literature, some good examples do exist, including the “Pediatric Guideline Adherence and Outcomes (PEGASUS) programme in severe traumatic brain injury,” which specifically reported on program adoption, penetration, and fidelity.20 The “Enhanced Peri-Operative Care for High-risk patients” (EPOCH) trial studied the implementation of a quality improvement program to improve outcomes in patients undergoing emergency abdominal surgery in 93 hospitals in the UK.21 The authors report data on numerous aspects of implementation science including acceptability, adoption, appropriateness, fidelity, and process evaluation.

    Table 1

    Outcomes for implementation science
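
    The measurement examples above can be made concrete by laying them out as a simple data structure. The following Python sketch simply restates those examples as a hypothetical measurement plan; the structure and entries are illustrative, not prescriptive.

```python
# Minimal sketch: a hypothetical measurement plan mapping each implementation
# outcome of interest to the level at which it is measured and the data
# source, restating the examples given in the text.
measurement_plan = {
    "acceptability": {"level": ["provider", "consumer"],
                      "source": "surveys or qualitative interviews"},
    "adoption":      {"level": ["provider", "organization"],
                      "source": "administrative or survey data"},
    "feasibility":   {"level": ["organization"],
                      "source": "case audits"},
}

for outcome, plan in measurement_plan.items():
    print(f"{outcome}: measured at {', '.join(plan['level'])} level via {plan['source']}")
```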

    Frameworks for implementation science

    Numerous frameworks for motivating and driving change in healthcare have been proposed in the literature.22 They are often encountered in the quality improvement literature, but some of these concepts clearly overlap with the field of implementation science. Commonly used approaches in the quality improvement realm include Lean Six Sigma and Plan-Do-Study-Act (PDSA).

    The Translating Evidence Into Practice (TRIP) model is a well-known approach to change that has been successfully applied to implementation science for more than a decade.23 TRIP is a four-step implementation framework that can be customized to either large-scale or small-scale interventions. In brief, the TRIP model has four main steps: (1) summarize the evidence, (2) identify local barriers to implementation, (3) measure performance, and (4) ensure all patients reliably receive the intervention. The fourth step is an ongoing iterative process composed of four key components: Engage, Educate, Execute, and Evaluate. The model is intuitive and easy to explain to all audience levels, even those without expertise in quality improvement or implementation science. It has been cited well over 250 times and has served as the model for numerous large-scale funded research collaboratives and projects.

    The most frequently cited model for evaluating and reporting implementation science work is the Consolidated Framework for Implementation Research (CFIR).24 CFIR organizes its constructs into five major domains to consider when assessing barriers and facilitators of implementation: (1) intervention characteristics, (2) outer setting, (3) inner setting, (4) characteristics of individuals, and (5) process. The CFIR can help plan collection and analysis of both qualitative and quantitative data. Qualitative guides cover interviews, observations, and meeting notes. For example, there are suggested approaches to semi-structured interviews or focus groups with physicians, nurses, patients, and key stakeholders to identify existing or potential barriers and facilitators when implementing new practices. Quantitative data may include scores on scales validated to examine concepts such as organizational readiness to change and Organizational Change Manager scores.25 While CFIR is the most widely adopted, other tools, such as the Implementation Science Research Development (ImpRes) tool, are available to help design high-quality implementation research.26

    The most commonly accepted implementation science conceptual framework to report outcomes is RE-AIM, which operationalizes outcomes into five main ideas: reach, effectiveness, adoption, implementation, and maintenance.27 28 The RE-AIM goal is to “encourage program planners, evaluators, readers of journal articles, funders, and policy-makers to pay more attention to essential program elements including external validity that can improve the sustainable adoption and implementation of effective, generalizable, evidence-based interventions.” It helps professionals to consider strengths and weaknesses of different interventions to improve clinical care. RE-AIM helps to answer critically important questions including:

    • How do I reach the targeted population with the intervention?

    • How do I know my intervention is effective?

    • How do I develop organizational support to deliver my intervention?

    • How do I ensure the intervention is delivered properly?

    • How do I incorporate the intervention so that it is delivered over the long term?

    RE-AIM has been used in the trauma setting primarily as it relates to injury prevention programs. More recently, it has begun to be used for other topics, such as evaluating a clinical decision support tool for pediatric head trauma.29

    Methodology

    Study designs

    While the routine types of research study design should be familiar to many trauma researchers, implementation science methods may be a newer concept. Performing interventional studies such as RCTs or non-interventional prospective observational studies will give trauma researchers a good background on which to build their research skills. The addition of health services research methodology, including approaches such as regression modeling to control for confounding, helps to grow the overall foundation. Some implementation science research can look very much like health services research, especially projects that use “natural experiments” or interventions to study changes in outcomes as noted above. For example, typical quality improvement articles often examine change in clinical outcomes before versus after an intervention. These papers have a common format and are often straightforward to write and read.30 Implementation science papers may have a similar setup, but use implementation outcomes (ie, adoption or feasibility) instead of clinical outcomes such as mortality.
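
    To make this concrete, the following is a minimal Python sketch, using entirely hypothetical data, of what such a before-versus-after analysis might look like when the outcome is an implementation outcome (here, adoption of an evidence-based practice) rather than a clinical outcome such as mortality.

```python
# Minimal sketch (hypothetical data): comparing an implementation outcome
# (adoption of an evidence-based practice) before vs. after an intervention,
# in the same pre/post format as a typical quality-improvement analysis.
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical patient-level records: study period and whether the
# evidence-based practice was actually delivered (adoption).
df = pd.DataFrame({
    "period":  ["pre"] * 200 + ["post"] * 200,
    "adopted": [1 if i % 3 == 0 else 0 for i in range(200)]   # ~33% adoption pre
             + [1 if i % 4 != 0 else 0 for i in range(200)],  # ~75% adoption post
})

# Adoption rate (an implementation outcome) by period.
print(df.groupby("period")["adopted"].mean())

# Simple 2x2 test of whether adoption changed after the intervention.
table = pd.crosstab(df["period"], df["adopted"])
chi2, p, _, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}")
```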

    Clinical trials can be performed as part of implementation science research, and similar to other trials, they are at the higher end of the evidence pyramid and supply stronger evidence. One of the key differences of implementation science trials versus other RCTs is the level of assignment or randomization. In a typical RCT, individual patients are assigned to one of two treatment arms. However, in implementation science, the level of assignment is often at a larger scale than individual patients. Cluster randomized trials might randomize at the floor, unit, clinic, or hospital level. For example, a cluster randomized trial examining the effectiveness of nurse education to improve venous thromboembolism prevention in hospitalized patients randomly assigned 21 floors within a hospital to one of two educational interventions.31 The benefits of cluster randomization may include the ability to study interventions that cannot be given to only selected patients (such as nurse education), and the ability to prevent contamination between individuals (ie, all nurses working together on the same floor receive the same intervention). The level of analysis often follows the level of randomization, although this is not required. Outcomes for a typical RCT are routinely analyzed at the patient level if clinical outcomes are being studied. However, in the implementation science space, outcomes at the unit level (ie, adoption, appropriateness, fidelity, penetration, and/or sustainability) are often reported.
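
    The following minimal Python sketch illustrates the idea of randomizing and analyzing at the cluster (floor) level rather than the patient level; the floor count mirrors the cited example, but the arm assignments and fidelity values are entirely hypothetical.

```python
# Minimal sketch (hypothetical data): cluster randomization at the floor
# level, with the implementation outcome (fidelity) summarized per cluster
# rather than per patient.
import random
import statistics

random.seed(42)

floors = [f"floor_{i}" for i in range(1, 22)]          # 21 hospital floors
random.shuffle(floors)
arm = {f: ("intervention" if i % 2 == 0 else "control")
       for i, f in enumerate(floors)}

# Hypothetical per-floor fidelity (proportion of doses given as prescribed).
fidelity = {f: random.uniform(0.6, 0.9) if arm[f] == "intervention"
               else random.uniform(0.4, 0.7)
            for f in floors}

# Analysis at the level of randomization: compare cluster-level means.
for a in ("intervention", "control"):
    vals = [fidelity[f] for f in floors if arm[f] == a]
    print(a, round(statistics.mean(vals), 3), f"(n={len(vals)} clusters)")
```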

    Another commonly used study design is the stepped-wedge cluster randomized trial.32 In this design, all enrolled clusters (ie, units, floors, clinics, hospitals) eventually receive the intervention. This is accomplished via random and sequential crossover of clusters from control to intervention until all units have been exposed. Each cluster acts as its own historical control. This design is especially powerful when there is heterogeneity among clusters.33 An excellent example of this trial design in trauma surgery is an ongoing study of the delivery of high-quality screening and intervention for post-traumatic stress disorder and comorbidities in adult trauma patients.34
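
    The crossover logic of a stepped-wedge design can be sketched in a few lines of Python; the hospital names, number of clusters, and random order below are purely illustrative.

```python
# Minimal sketch: generating a stepped-wedge crossover schedule in which
# clusters (here, hypothetical hospitals) cross from control to intervention
# in a random, sequential order until all have been exposed.
import random

random.seed(7)

clusters = [f"hospital_{c}" for c in "ABCDEF"]
periods = len(clusters) + 1            # one baseline period, then one step per cluster

crossover_order = clusters[:]
random.shuffle(crossover_order)        # random order of crossover

# schedule[cluster] is a list of "C" (control) or "I" (intervention) by period.
schedule = {
    c: ["C" if p <= step else "I" for p in range(periods)]
    for step, c in enumerate(crossover_order)
}

for c in crossover_order:
    print(c, " ".join(schedule[c]))
```

    Each printed row shows one cluster switching from control (C) to intervention (I) at its randomly assigned step, so all clusters begin in the control condition and all end on the intervention.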

    Implementation science usually requires a multidisciplinary research team. Qualitative and mixed-methods studies often benefit from individuals not usually included in traditional surgical research teams, such as social scientists, medical anthropologists, human factors engineers, behavioral scientists, and health economists. Stakeholder perspectives from all possible angles are beneficial to improve these types of projects. Implementation science teams might also need frontline partners such as administrators, as well as physicians, nurses, and other clinical providers whose practice will be affected by or related to the intervention.

    Methodological examples

    Our ability to study and pinpoint barriers to the implementation of evidence-based measures and best practices is dependent on selecting context-relevant methodology. Once the outcomes of interest and framework are identified, the next step is to match the ideal study methodology. We provide two real-life examples rather than outline an exhaustive list, understanding that specific clinical scenarios will reveal different barriers that require tailored methodologies. The first example uses a mixed-methods approach to study emergency surgical care in northern Uganda; the second, the development of a best-practices model in hospital-based violence intervention.

    Soroti Regional Referral Hospital in northern Uganda serves more than 2 million people across eight districts, generating about 260 surgical referrals monthly.35 As in many low-income countries, obstacles to life-saving surgical care are prevalent but poorly understood. Outlining these obstacles requires a number of methods under the umbrella of implementation science. For this study, a mixed-methods approach provided detailed findings that could be presented to key stakeholders as a first step in improving essential surgical care in a resource-limited setting. The Surgeons Overseas’ Personnel, Infrastructure, Procedures, Equipment and Supplies (PIPES) survey, with 105 variables in five domains, was used to reveal deficiencies in both workforce and infrastructure, allowing targeted intervention for improvement (available at https://www.surgeonsoverseas.org/resources/).36 These results were combined with process mapping, or time and motion studies, to pinpoint issues with access to urgent surgical care; large patient volume was found to account for the greatest delay to timely care. Finally, focus groups of key stakeholders and healthcare providers were conducted and analyzed qualitatively. This information corroborated some of the PIPES data but also highlighted two other key components driving change: the strength of attendant (family) care and the determination and ingenuity of the provider team. This example addresses adoption, appropriateness, and feasibility of improvements to emergency surgical care in Uganda.

    The second “real life” vignette involves a public health approach to hospital-based violence intervention.37–39 The hospital-based violence intervention program at San Francisco General Hospital has, over the past decade, developed and maintained fidelity to the components and conduct that lead to successful outcomes. These outcomes include reduction in injury recidivism, programmatic capacity to address social determinants of health, and victims’ perceived value of the program. This group also demonstrated that the program could be implemented at another trauma center, studying the transfer process to identify barriers to feasibility and to maintain the fidelity and sustainability of the program on a larger scale.

    Studying implementation of programs requires operationalizing outcome measures to determine if the program meets success metrics for all stakeholders. For the hospital-based violence intervention program referenced above, a variety of implementation outcomes were studied in addition to clinical outcomes. The program’s clinical benefits were evaluated by examining whether the program met the stated needs of the community and by recording injury and criminal recidivism rates. Accessibility and adoption outcomes were studied using formative and process evaluations, which investigated whether the program successfully screened, enrolled, and retained the target population. Qualitative semi-structured interviews of patients were used to describe the appropriateness and acceptability of these programs to end-users. Qualitative methods were also used to examine barriers to care from the perspectives of key stakeholders such as city government officials, private sector executives, hospital staff, and community-based organizations, which addressed acceptability and feasibility. Cost-analysis studies were performed to ensure that this type of public health programming would not be financially onerous and would therefore have reasonable sustainability. Lastly, the program was adopted at another institution; studying its portability examined whether the program could be adopted and implemented in additional settings.
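
    As a simple illustration of how such screening, enrollment, and retention metrics might be operationalized, the following Python sketch computes them from entirely hypothetical counts.

```python
# Minimal sketch (hypothetical counts): operationalizing screening,
# enrollment, and retention metrics for an intervention program by
# comparing eligible, screened, enrolled, and retained patients.
eligible = 400      # injured patients meeting hypothetical program criteria
screened = 320
enrolled = 240
retained_6mo = 180  # still engaged at 6 months

def rate(numerator: int, denominator: int) -> float:
    """Proportion, guarded against division by zero."""
    return numerator / denominator if denominator else 0.0

print(f"screening rate:  {rate(screened, eligible):.0%}")
print(f"enrollment rate: {rate(enrolled, screened):.0%}")
print(f"retention rate:  {rate(retained_6mo, enrolled):.0%}")
```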

    De-implementation

    On the other side of implementation science lies the concept of de-implementation, or the removal of harmful or unnecessary practices. These efforts should be systematic and should end the use of low-value care, whether or not an alternative is available. However, de-implementation is likely underappreciated in the literature, as there is cognitive bias against removing a treatment from a paradigm. De-implementation can be considered an implicit part of implementation and organizational change, although the strategies required are often different.40 One conceptual model describes four main types of de-implementation change: partial reversal, complete reversal, related replacement, or unrelated replacement. In clinical practice, de-implementation does occur, and the extent to which treatments are de-implemented and the processes by which de-implementation succeeds should be studied.

    Partial reversal changes the frequency, breadth, or scale of an outmoded intervention so that it is provided only to a subgroup of patients or at a longer interval. In the trauma bay, selective placement of tubes (rather than fingers and tubes in every orifice) is a start. Selective use of plain radiographs—such as eliminating x-ray of the pelvis in selected patients who will be undergoing CT scan of the abdomen and pelvis—can save time, money, and radiation exposure.41 Decision rules allowing selective use of imaging for cervical spine clearance (ie, NEXUS and Canadian c-spine rules) are other good examples of partial reversal.42 43

    Complete reversal, or discontinuation without replacement, can also occur. If an intervention has been shown to have no benefit to any subgroup on any timeframe, the practice can be completely eliminated. One example of complete reversal in trauma practice is the complete discontinuation of the use of steroids for routine treatment of spinal cord injury.44 45 Another strong push for reversal is in the clearance of the cervical spine in obtunded or intoxicated adult blunt trauma patients based on high-quality CT scan alone rather than with MRI.46 47 Despite a preponderance of papers suggesting this approach, there remains much variation in practice48—a topic ripe for de-implementation science studies.

    Reversal with a related replacement or substitution allows the use of a related or more effective clinical practice. For example, in trauma, low-molecular-weight heparin has replaced unfractionated heparin for standard prophylaxis against deep vein thrombosis for most trauma patients.49 The fourth type of de-implementation includes reversal with an unrelated replacement. One example of this within trauma surgery is the evolution of treatment of splenic laceration with embolization instead of surgery, a procedure that allows for splenic salvage as well as preservation of splenic function without major abdominal surgery.50

    De-implementation does occur in clinical practice but is often not studied with the rigor with which we study other scientific changes. The study of de-implementation is an opportunity to ensure that ineffective practices do not continue to reach our patients.

    Conclusions

    The CNTR is a broad coalition of US-based national organizations and professional societies brought together to focus attention on the significant public health problem of traumatic injury. CNTR aims to advocate for consistent and significant federal funding for trauma research commensurate with the injury burden in the USA.51 Currently, there is significant room for improvement in major funding across all areas of trauma research. Funding opportunities exist in the realm of implementation science, and this is a major frontier in which the trauma research community is primed to make a significant impact. Increasingly, large-scale funding opportunities for implementation science research are being offered by the National Institutes of Health, the Agency for Healthcare Research and Quality, the Veterans Affairs system, the Patient-Centered Outcomes Research Institute, and other large national organizations.

    Basic, clinical, and translational science research have been the backbone of trauma research for decades. We are not advocating an end to these types of research; only through such investigations will we discover new drugs, surgical or procedural therapies, diagnostic tests, and cutting-edge care for patients. However, we implore the trauma research community to also embrace other frontiers of research, including implementation science, in order to learn how best to bring the right care to the right patient in the right place at the right time.

    Acknowledgments

    The authors greatly appreciate the ongoing financial support of The Coalition for National Trauma Research Scientific Advisory Committee (CNTR-SAC) from the following organizations: American Association for the Surgery of Trauma (AAST), American College of Surgeons (ACS), American College of Surgeons Committee on Trauma (ACS-COT), Eastern Association for the Surgery of Trauma (EAST), National Trauma Institute (NTI), and Western Trauma Association (WTA).

    References

    Footnotes

    • Collaborators Coalition for National Trauma Research Scientific Advisory Committee: Saman Arbabi, MD FACS1; Eileen Bulger, MD FACS1; Mitchell J. Cohen, MD FACS2; Todd W. Costantini, MD FACS3; Marie M. Crandall, MD, MPH FACS4; Rochelle A. Dicker, MD FACS5; Elliott R. Haut, MD, PhD FACS6-8; Bellal Joseph, MD FACS9; Rosemary A. Kozar, MD, PhD FACS10; Ajai K. Malhotra, MD FACS11; Avery B. Nathens, MD, PhD, FRCS, FACS12; Raminder Nirula, MD, MPH FACS13; Michelle A. Price, PhD, MEd14; Jason W. Smith, MD FACS15; Deborah M. Stein, MD, MPH FACS FCCM16; Ben L. Zarzaur, MD, MPH FACS1. From the: 8. University of Washington; 9. University of Colorado; 10. UC San Diego School of Medicine; 11. University of Florida College of Medicine Jacksonville; 12. University of Arizona; 13. University of Maryland; 14. University of Vermont; 15. University of Toronto; 16. University of Utah; 17. National Trauma Institute; 18. University of Louisville; 19. University of California–San Francisco; 20. University of Wisconsin School of Medicine and Public Health.

    • Contributors All authors have contributed substantially and all members of the CNTR SAC have approved the submission.

    • Funding VH was supported by the Case Western Reserve University/Cleveland Clinic CTSA, via NCATS KL2TR000440. This publication was made possible by the Clinical and Translational Science Collaborative of Cleveland, KL2TR000440 from the National Center for Advancing Translational Sciences (NCATS) component of the National Institutes of Health and NIH roadmap for Medical Research. ERH is/was primary investigator of contracts from The Patient-Centered Outcomes Research Institute (PCORI), entitled “Preventing Venous Thromboembolism: Empowering Patients and Enabling Patient-Centered Care via Health Information Technology” (CE-12-11-4489) and “Preventing Venous Thromboembolism (VTE): Engaging Patients to Reduce Preventable Harm from Missed/Refused Doses of VTE Prophylaxis” (DI-1603-34596). ERH is primary investigator of a grant from the Agency for Healthcare Research and Quality (AHRQ) (1R01HS024547) entitled “Individualized Performance Feedback on Venous Thromboembolism Prevention Practice,” and is a co-investigator on a grant from the NIH/NHLBI (R21HL129028) entitled “Analysis of the Impact of Missed Doses of Venous Thromboembolism Prophylaxis.” ERH is supported by a contract from The Patient-Centered Outcomes Research Institute (PCORI), “A Randomized Pragmatic Trial Comparing the Complications and Safety of Blood Clot Prevention Medicines Used in Orthopedic Trauma Patients” (PCS-1511-32745). ERH receives research grant support from the DOD/Army Medical Research Acquisition Activity and has received grant support from the Henry M. Jackson Foundation for the Advancement of Military Medicine (HJF). ERH receives royalties from Lippincott, Williams, Wilkins for a book—"Avoiding Common ICU Errors." ERH was the paid author of a paper commissioned by the National Academies of Medicine titled “Military Trauma Care’s Learning Health System: The Importance of Data Driven Decision Making,” which was used to support the report titled, “A National Trauma Care System: Integrating Military and Civilian Trauma Systems to Achieve Zero Preventable Deaths After Injury.”

    • Disclaimer The contents are solely the responsibility of the authors and do not necessarily represent the official views of the NIH.

    • Competing interests None declared.

    • Patient consent for publication Not required.

    • Provenance and peer review Not commissioned; internally peer reviewed.