How do we PI? Results of an EAST quality, patient safety, and outcomes survey
  1. Daniel Horwitz1,2,
  2. Ryan Peter Dumas3,
  3. Kyle Cunningham4,
  4. Carlos H Palacio5,
  5. Daniel R Margulies6,
  6. Christine Eme7,
  7. Marko Bukur1,2
  1. 1Department of Surgery, NYU Langone Health, New York, New York, USA
  2. 2Division of Trauma and Acute Care Surgery, Bellevue Hospital Center, New York City, New York, USA
  3. 3Department of Surgery, UT Southwestern Medical, Dallas, Texas, USA
  4. 4Department of Surgery, Atrium Health, Charlotte, North Carolina, USA
  5. 5Trauma Department, McAllen Medical Center, McAllen, Texas, USA
  6. 6Department of Surgery, Cedars-Sinai Medical Center, Los Angeles, California, USA
  7. 7Eastern Association for the Surgery of Trauma, Chicago, Illinois, USA
  1. Correspondence to Dr Daniel Horwitz; dlhorwitz88{at}


Background Quality improvement is a cornerstone for any verified trauma center. Conducting effective quality and performance improvement, however, remains a challenge. In this study, we sought to better explore the landscape and challenges facing the members of the Eastern Association for the Surgery of Trauma (EAST) through a survey.

Methods A survey was designed by the EAST Quality Patient Safety and Outcomes Committee. It was reviewed by the EAST Research and Scholarship Committee and then distributed to 2511 EAST members. The questions were designed to understand the frequency, content, and perceptions surrounding quality improvement processes.

Results There were 151 respondents of the 2511 surveys sent (6.0%). The majority were trauma faculty (55%) or trauma medical directors (TMDs) (37%) at American College of Surgeons level I (62%) or II (17%) trauma centers. We found a wide variety of resources being used across hospitals, with the majority of cases being identified by a TMD or attending (81%) for multidisciplinary peer review (70.2%). There was a statistically significant difference in the perceived effectiveness of the quality improvement process, with TMDs more likely than their peers to describe their process as moderately or very effective (77.5% vs. 57.7%, p=0.026). The ‘Just Culture’ model appeared to have a positive effect on the process improvement environment, with providers less likely to report a non-conducive environment (10.9% vs. 27.6%, p=0.012) and less likely to report feelings of blame being assigned (3.1% vs. 13.8%, p=0.026).

Conclusion Case review remains an essential but challenging process. Our survey reveals a need to continue to advocate for appropriate time and resources to conduct strong quality improvement processes.

Level of evidence Epidemiological study, level III.

  • quality improvement

Data availability statement

Data are available upon reasonable request. All responses relevant to the study are included in the article or uploaded as supplementary information; the raw response data are available upon reasonable request.

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See:


  • Process improvement remains a difficult and uncomfortable task for trauma providers.


  • There is wide variability in how process improvement is conducted and perceived across trauma centers.


  • Trauma surgeons and trauma medical directors should be aware of the challenges their teams and institutions face when engaging in process improvement, such as time commitments and non-conducive environments.


Process and quality improvement in trauma care remain core principles for every American College of Surgeons (ACS) verified trauma center. The newest edition of Resources for Optimal Care of the Injured Patient calls for ‘resources allocation (such as equipment, personnel, and administrative support), a commitment to patient safety, and an enduring focus on continuous PI’ to prepare centers to care for the breadth and depth of pathology in the traumatically injured patient.1 Conducting effective performance improvement (PI) is time-consuming but remains paramount to success in trauma care. The vast array of resources available to guide trauma PI leads to diverse implementation strategies across verified programs. In a survey of trauma quality improvement by Zetlen et al in 2017, only 66% of providers in high-income countries identified systems improvement as the perceived objective of morbidity and mortality conferences, with 49% of providers identifying lack of time and 17% identifying lack of interest as barriers to increased use of conferences and trauma registries.2 However, when adopted, quality improvement processes are effective: Hemmila et al examined the Michigan Trauma Quality Improvement Program and presented data suggesting that formalized quality improvement programs improve patient outcomes and decrease resource use.3 As PI is currently non-standardized and carried out in various forms, we sought to better understand how quality and PI are undertaken among trauma clinicians. The objective of this study was to survey EAST members to understand how they execute performance improvement patient safety (PIPS) plans, the resources required, the frequency of meetings, the participants in the process, and the perceived effectiveness of the program. We hypothesized that there would be significant center-level variability in PI programs among verified trauma centers.


A REDCap survey tool was created by the Eastern Association for the Surgery of Trauma (EAST) Quality, Patient Safety, and Outcomes Committee. The survey creation team consisted of two trauma medical directors (TMDs), a trauma program manager, and the EAST Quality, Patient Safety, and Outcomes Committee chair. The questions were based on several resources commonly used in PIPS across institutions. The survey was then internally validated by circulating it among committee members to test for readability, relevance, and performance, and feedback from the committee was used to finalize it (see online supplemental appendix A). The survey was sent to 2511 EAST members, who received an initial email inviting them to complete it and two subsequent reminders. No incentives were offered for completion. Membership categories surveyed included active, senior, provisional, and associate EAST members. Residents and fellows were included in the survey (approximately 3% of responses).

Supplemental material

The survey was designed to identify the setting in which the respondents practiced as well as the methods of case review and quality improvement used. It was also designed to better understand how institutions identify cases for quality improvement, the perception of the quality improvement process, and any perceived barriers to conducting effective PI. We also sought to evaluate the use of adjunctive PI measures such as trauma video review (TVR), participation in the Trauma Quality Improvement Program (TQIP)/collaboratives, and the ‘Just Culture’ program. Data analysis was performed with SPSS (version 25), using χ2 and Fisher’s exact tests to analyze the categorical survey responses where appropriate.
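The comparisons of categorical responses described above (χ2 for larger cells, Fisher’s exact for sparse ones) can be sketched as follows. This is an illustrative example only: the 2×2 counts below are hypothetical stand-ins, not the study’s raw data, and the analysis in the paper was performed in SPSS rather than Python.

```python
# Illustrative sketch of the statistical comparisons reported in the paper.
# The counts below are HYPOTHETICAL, chosen only to show the method.
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical 2x2 table: rows = TMD vs non-TMD respondents,
# columns = rated PIPS "moderately/very effective" vs not.
effectiveness = [[31, 9],
                 [56, 41]]
chi2, p_chi, dof, expected = chi2_contingency(effectiveness)
print(f"chi2={chi2:.2f}, p={p_chi:.3f}, dof={dof}")

# Fisher's exact test is preferred when expected cell counts are small,
# e.g. for rare responses such as "blame was assigned"
# (hypothetical counts: Just Culture vs non-Just Culture programs).
blame = [[1, 31],
         [16, 100]]
odds_ratio, p_fisher = fisher_exact(blame)
print(f"OR={odds_ratio:.2f}, p={p_fisher:.3f}")
```

Note that `chi2_contingency` applies the Yates continuity correction by default for 2×2 tables; for small expected counts, `fisher_exact` avoids the χ2 approximation entirely.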


The survey was returned by 151 respondents of the 2511 surveys sent (6.0% response rate). There were no incomplete responses. As seen in table 1, most of the respondents were either trauma faculty (83 of 151, 55%) or TMDs (37%) at either ACS-verified level I (62%) or level II (26 of 151, 17%) trauma centers. Most respondents practiced at academic (62%) hospitals with varying distributions of trauma volume.

Table 1

Survey response data

Other than the uniform use of the Resources for Optimal Care of the Injured Patient manual (‘Orange Book’) and high participation in the Trauma Outcomes and Performance Improvement Course (TOPIC), there was wide variety in the resources used to assist in PI (table 2).1 Academic and level I trauma centers had a greater percentage of respondents who participated in two or more courses, though this was not statistically significant (p=0.090 for academic centers, p=0.322 for level I trauma centers). The majority of respondents indicated that PIPS is performed monthly (51%) or weekly (45%). Level III review (multidisciplinary peer review, 70%) is the most common level of review performed at the respondents’ institutions. Most institutions (70%) identified a multidisciplinary peer review team, with emergency medicine (91%), orthopedics (87%), anesthesia (85%), and critical care (70%) as the most common attendees. The types of cases selected for PIPS, and how they are identified, are also displayed in table 2. Survey respondents indicated that there are a wide variety of indications for quality review and that cases are reported by a range of healthcare providers. Approximately one-third of trauma center respondents reviewed deaths only at peer review.

Table 2

PIPS plan characteristics

Provider views on the PIPS process are displayed in table 3. Most of the respondents (92%) felt that correcting and preventing errors was a major objective of their conferences, with education/literature review (84%), obtaining different viewpoints/perspectives, and loop closure being other important outcomes. Only 9% felt that assigning blame was an objective of the conference. Respondents identified several PIPS barriers, including time constraints (53%), lack of participant engagement (42%), and limited institutional resources (41%). Twenty-one percent of respondents also stated that the PI format was not conducive to case review. When asked whether corrective actions taken after opportunities identified during PIPS conferences lead to effective change, 92% said yes (fairly, moderately, or very effective) and 8% said no. Individual reflections after PIPS conferences were also assessed, with respondents queried on how they felt after the presentation of a case they were involved in. Seventy-six percent felt they learned from potential errors or that a system error was identified, while 24% felt that the errors were inevitable, felt guilt and uncertainty about the outcome, or felt that blame was assigned. When stratifying PIPS effectiveness by respondent role, we noted some interesting findings (figure 1). TMDs were more likely than their colleagues to rate their PIPS conferences as moderately or very effective rather than not effective or fairly effective (77.5% vs. 57.7%, p=0.026).

Figure 1

Performance improvement patient safety effectiveness by role. TMD, trauma medical director.

Table 3

Provider views on PIPS process

Adjunctive PIPS processes were also evaluated (table 4). Participation in TQIP was the most common adjunctive measure, reported by 95% of respondents, and 59% participated in a TQIP collaborative. TQIP data were most frequently used to drill down on areas of weakness and create PIPS projects (76%). The majority of respondents also indicated that these data were shared in their PIPS conferences (70%) rather than being viewed only by trauma program leadership. The Just Culture philosophy was incorporated by 14% of respondent programs, with just over half (56%) of those TMDs feeling that it improved the quality of their PI process; a quarter of respondents stated that they were at too early a stage to tell. There was great variety in how the Just Culture algorithm was implemented in the PIPS process. Programs with Just Culture feedback models were less likely to identify their environments as not conducive to constructive case review (10.9% vs. 27.6%, p=0.012) and less likely to report feelings of blame being assigned (3.1% vs. 13.8%, p=0.026). TVR was reported to be used by 20% of respondents, with the majority of institutions recording all activations. Videos selected for TVR were primarily used to focus on team communication, management decisions, and clinical performance, typically in small group conferences (61%). Videos were selected for review by a TVR faculty coordinator (32%), any faculty member (29%), trauma program managers (16%), or the TMD (13%). The videos were primarily reviewed in a quality improvement conference (97%) or small group format (55%), with the goals of identifying communication errors (93.5%) and patient management errors (87%) and evaluating clinical performance (84%). The majority of respondents indicated that cases were referred to the medical examiner (ME) for autopsy. ME reports were most frequently shared in a PI conference, though for over 40% of respondents these were reviewed only by trauma administration.

Table 4

Adjunctive PIPS measures


The ACS Committee on Trauma has set the standard for improving the quality of care for injured patients for decades. Since the Resources for Optimal Care of the Injured Patient was first published in 1979, it has served as the metric against which centers are benchmarked.4 A central component of this document is multidisciplinary PI focused on the structure and process of care while monitoring patient outcomes; deficiencies in this area are among the most frequent criterion deficiencies (CDs) that cause trauma centers to fail verification.4–6 As such, this document was uniformly cited by the respondents of this survey as a key basis for PIPS conferences. Since the inception of the resources document, a host of other resources and courses from a variety of professional societies (ie, TOPIC and Optimal) and industry (eg, Just Culture and Six Sigma) have also shaped the current conduct of the PIPS process in trauma centers across the USA.

While it is evident that most centers participate in multidisciplinary PIPS efforts, there is variability regarding the participants in these efforts. Subspecialty liaison presence is mandated through the CDs set forth in the resources document, and those representatives are overwhelmingly represented in our survey. However, as trauma care transcends the entire hospital structure, it is surprising that consistent administrative presence is lacking in the majority of institutions. The PIPS case selection method also appears to be variable, with up to 30% of institutions focusing only on deaths and severe complications. Such methods miss the opportunity to identify systems issues until they reach the level of severe patient harm. Errors generally involve competent providers with the best intentions who are practicing in complex sociotechnical systems.7 High-reliability institutions and organizations that value safety anticipate imperfection and are committed to designing systems that minimize it. A key component of improving system design is a system of just accountability in which individuals are encouraged to report and discuss errors or near misses, an approach supported by the Healthcare Quality Improvement Act.8

Of the EAST members who responded to the survey, the majority viewed their quality improvement process in a positive light and as a tool for preventing errors, reviewing best practices, and discussing complex patient care cases with colleagues. Yet our survey data reveal the tough reality of process improvement in trauma centers in the USA. Only 29% of those surveyed felt that their process was very effective, and our analysis suggests that this sentiment is skewed toward the TMDs who oversee it. More concerning still was the proportion of respondents who found their quality improvement programs to be ineffective, with a focus on assigning blame or instilling a sense of guilt over poor outcomes. Staff who perceive that they are working in a punitive or biased environment are less likely to report errors, which can lead to deficiencies in the care process.7 9 Centers with impartial, well-structured peer review have positive impacts on clinical safety and performance.10 The Just Culture algorithm introduced by Outcome Engenuity11 focuses on an evidence-based peer review process that recognizes that human error is omnipresent and concomitantly evaluates system design in a fair and impartial manner.12 Adopting this algorithm requires training and supportive backing to succeed, further underscoring the importance of hospital leadership participation in quality review. Implementation of Just Culture requires time along with the willingness to allow clinical departments the flexibility to move away from the traditional ‘blame’ culture of medicine. However, facilities that have undertaken this endeavor have had successful results.7 Our survey data appear to support these observations, with institutions participating in the Just Culture algorithm having environments more conducive to peer review dialog. Obtaining closure of peer review cases was also noted to be a challenge in our survey results. These findings mirror those of Hamad et al in a recent TQIP analysis using the anonymous Mortality Reporting System: in 7.3% of the reported deaths, no mitigation strategy was identified to prevent future occurrences.13 This suggests that we are not identifying all human-level and systems-level issues in our PIPS process. The same authors also found that corrective actions tend to focus on the provider or care level (ie, education, counseling, and guidelines) as opposed to more effective measures of change, such as process simplification and standardization, built-in redundancies, barriers, or fail-safes, which have more durable effects.14

Participation in other adjunctive PI measures appeared to mirror previously reported trends. As TQIP participation is a requirement for ACS-verified trauma centers, it is unsurprising that 95% of the respondents are involved in this endeavor. Almost 60% are also members of a TQIP collaborative, which has been linked to reductions in complications and resource use at a rate greater than that achieved using individual TQIP results alone.15 TVR use also appeared to be in line with what has been reported nationally (20% to 29%),16 17 with recording practices that vary nationally. As this is an evolving tool with an ongoing EAST multicenter study, its optimal use has yet to be determined. Postmortem examinations also continue to be used nationally and were well represented in our survey, despite criticisms that these are resource-intensive procedures that rely on local ME availability.18 Review of autopsy data as part of the peer review process has been linked with improved outcomes in the assessment of trauma system function.19 Our survey responses appear to support this notion, as the majority of reports are reviewed formally at PIPS meetings.

Our study has several limitations. As with any survey, ours is limited by the overall low response rate, so it offers only a small glimpse into the PI process at predominantly academic level I trauma centers. The survey links were also not personalized to each participant, so it is possible that a respondent completed the survey multiple times. The survey offers little insight into why respondents may find their PIPS ineffective or judgmental, as the answers were standardized rather than allowing free-text responses. The current survey is also likely to be heavily based on interpretation of the Resources for Optimal Care of the Injured Patient.1 With a new resources manual forthcoming, the applicability of these results may change, as the PI section will now be more standardized with expectations more clearly delineated.1 Additionally, EAST members have increased exposure to evolving methods, such as Just Culture, through programming including the Short Course on Trauma Quality, TOPIC, and other offerings at the annual scientific assembly. Access to many of the available PIPS resources appears to be influenced by the respondents’ institution, and the reasons for this are unknown. Many of the challenges identified exist predominantly at the institutional level, and we should strive to continue to recognize them and tailor these processes to meet institutional needs.


Case review can be a difficult and uncomfortable process. Our survey data appear to reflect that reality, which poses a challenge to physicians and staff of trauma centers nationwide. Several resources are available to enhance the PIPS process, but their incorporation is far from standardized. Trauma leaders should advocate for the appropriate resources and time to conduct just, impartial process improvement activities and should share their collective experiences to advance our knowledge of this crucial component of our trauma system.

Ethics statements

Patient consent for publication

Ethics approval

The final version of the survey was reviewed and approved for distribution by the Eastern Association for the Surgery of Trauma Research and Scholarship Committee and was exempted from institutional review board (IRB) review. The IRB considered this study exempt because it was a quality improvement study and determined that no consent or consent statement was required, as the survey was considered non-human subjects research.


The authors acknowledge the Eastern Association for the Surgery of Trauma Quality, Patient Safety, and Outcomes Committee.


Supplementary materials

  • Supplementary Data

    This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.


  • Contributors All authors contributed equally to this study. Specific contributions include conceptualization and study design (MB, RPD, KC, CHP, and CE); data access, data analysis, and data interpretation (MB, DH, RPD, KC, CHP, and DRM); and manuscript writing and submission (MB, DH, RPD, KC, CHP, DRM, and CE). MB is the guarantor.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.