Opportunities to improve clinical summaries for patients at hospital discharge
  1. Erin Sarzynski1,2,
  2. Hamza Hashmi3,
  3. Jeevarathna Subramanian3,
  4. Laurie Fitzpatrick1,
  5. Molly Polverento1,2,
  6. Michael Simmons4,
  7. Kevin Brooks2,
  8. Charles Given1,2
  1. Department of Family Medicine, Michigan State University College of Human Medicine, East Lansing, Michigan, USA
  2. Institute for Health Policy, Michigan State University College of Human Medicine, East Lansing, Michigan, USA
  3. Grand Rapids Medical Education Partners, Grand Rapids, Michigan, USA
  4. Sparrow Health System, Lansing, Michigan, USA

  Correspondence to Dr Erin Sarzynski, Michigan State University College of Human Medicine, 965 Fee Road, East Lansing, MI 48824, USA; erin.sarzynski@hc.msu.edu

Abstract

Background Clinical summaries are electronic health record (EHR)-generated documents given to hospitalised patients during the discharge process to review their hospital stays and inform postdischarge care. Presently, it is unclear whether clinical summaries include relevant content or whether healthcare organisations configure their EHRs to generate content in a way that promotes patient self-management after hospital discharge. We assessed clinical summaries in three relevant domains: (1) content; (2) organisation; and (3) readability, understandability and actionability.

Methods Two authors performed independent retrospective chart reviews of 100 clinical summaries generated at two Michigan hospitals using different EHR vendors for patients discharged 1 April–30 June 2014. We developed an audit tool based on the Meaningful Use view-download-transmit objective and the Society of Hospital Medicine Discharge Checklist (content); the Institute of Medicine recommendations for distributing easy-to-understand print material (organisation); and five readability formulas and the Patient Education Materials Assessment Tool (readability, understandability and actionability).

Results Clinical summaries averaged six pages (range 3–12). Several content elements were universally auto-populated into clinical summaries (eg, medication lists); others were not (eg, care team). Eighty-five per cent of clinical summaries contained discharge instructions, more often generated from third-party sources than manually entered by clinicians. Clinical summaries contained an average of 14 unique messages, including non-clinical elements irrelevant to postdischarge care. Medication list organisation reflected reconciliation mandates, and dosing charts, when present, did not carry column headings over to subsequent pages. Summaries were written at the 8th–12th grade reading level and scored poorly on assessments of understandability and actionability. Inter-rater reliability was strong for most elements in our audit tool.

Conclusions Our study highlights opportunities to improve clinical summaries for guiding patients' postdischarge care.

  • Medication safety
  • Health services research
  • Patient-centred care
  • Patient education

Introduction

Prompted by the Health Information Technology for Economic and Clinical Health Act of 2009, hospitals in the USA are eligible to receive incentive payments from the Centers for Medicare and Medicaid Services (CMS) by using ‘certified’ electronic health record (EHR) technology to achieve Meaningful Use (MU) objectives.1,2 Implemented in three stages, the overall goal of MU is to use EHRs to engage patients and families and improve care coordination and clinical outcomes.3 Among several MU objectives in stages 1 and 2, eligible hospitals must provide patients the ability to view, download and transmit (VDT) information about their hospital stays.4,5 During early versions of stage 1, hospitals could attest to this objective by distributing EHR-generated ‘clinical summaries’ (paper documents) to patients during the hospital discharge process.6 Clinical summaries are provider-to-patient documents, as opposed to ‘discharge summaries’ or ‘summary of care documents’, which are provider-to-provider documents. While MU no longer requires clinical summaries, their provision is becoming the standard of care for hospital discharges across the USA.7 Thus, clinicians have an additional opportunity to promote self-management by reviewing clinical summaries with patients and their caregivers during the hospital discharge process.

Clinical summaries are document templates that auto-populate clinical information extracted from various sections of patients' EHR charts and solicit additional information in the form of template headings. EHR vendors offer healthcare systems the opportunity to customise clinical summary templates according to local preferences, which clinicians can modify manually or by point-and-click menu options. Required elements of the clinical summary were originally defined by stage 1 for eligible hospitals (core objectives #11 and #12), which were updated in 2014 and became the VDT objective.4–8 The intended purpose of clinical summaries is twofold: (1) to summarise patients' hospital stays; and (2) to provide patients and caregivers the information necessary to self-manage and navigate their postdischarge care. This document is critical, since prior studies indicate that patients' understanding of key aspects of postdischarge care is poor.9–11 Moreover, refining the content and organisation of discharge documentation, ideally with patient input, is key to improving transitional care for vulnerable patients, including older adults and those with limited health literacy.12–16
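
To make the template mechanism concrete, the following minimal sketch shows how a summary might be auto-populated from structured chart data. It is our illustration only, not any vendor's actual schema; all field names are hypothetical.

```python
# Minimal sketch of clinical-summary auto-population (hypothetical schema,
# not any vendor's actual implementation).
from string import Template

SUMMARY_TEMPLATE = Template("""\
CLINICAL SUMMARY
Patient: $name    Admitted: $admitted    Discharged: $discharged
Reason for hospitalisation: $reason
Problem list: $problems
Medications: $medications
Discharge instructions: [clinician enters text or inserts from menu]
""")

def generate_summary(chart: dict) -> str:
    """Extract data from sections of the EHR chart and fill the template;
    headings without chart data remain for manual (or menu-driven) entry."""
    return SUMMARY_TEMPLATE.substitute(
        name=chart["name"],
        admitted=chart["admit_date"],
        discharged=chart["discharge_date"],
        reason=chart["reason_for_admission"],
        problems="; ".join(chart["problem_list"]),
        medications="; ".join(chart["medication_list"]),
    )
```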

The overall goal of our study was to assess clinical summaries in three key domains relevant to guiding patients' postdischarge care: (1) content; (2) organisation; and (3) readability, understandability and actionability. These domains provide a framework for assessing how patients may perceive the educational tools they receive during the hospital discharge process. To accomplish this goal, we assessed clinical summaries produced by two different commercially available EHR vendors, which were customised at two different hospitals (herein denoted hospital/vendor A and hospital/vendor B).

Methods

Study design and sample

This pilot study was a retrospective chart review of clinical summaries produced at two Michigan hospitals using different commercially available EHR systems. Both hospitals used 2011–2014 hybrid editions of certified EHR technology during the evaluation period and reported on the same stage 1 MU view–download–transmit criteria.4 Eligible patients were ≥18 years old and discharged home from academic internal medicine services from 1 April to 30 June 2014. We excluded patients hospitalised under observation status and those discharged to care facilities.

The academic internal medicine units at both hospitals include physicians who rotate on and off service. The academic medicine unit at hospital A has 12 attending physicians (rotating every 2 weeks) and 36 resident physicians (rotating every 4 weeks) to cover four services; each service consists of one attending physician and three resident physicians. The academic medicine unit at hospital B has five attending physicians and 36 resident physicians (rotating every 4 weeks) to cover one service, which consists of one attending physician and two resident physicians. On average, each academic service at hospital A discharges 45 patients per month, and the academic service at hospital B discharges 35 patients per month. Resident physicians perform discharges at both hospitals, and this workflow generates clinical summaries ‘behind the scenes’, which bedside nurses print and review with patients before discharge. We identified more than 100 eligible subjects at each institution during the 3-month sampling period. We sorted eligible subjects alphabetically by last name in 2-week increments (the first 2 weeks of each 4-week block) and performed 129 sequential chart reviews (66 at hospital A and 63 at hospital B) until we identified 50 that met criteria at each institution (n=100). This sampling scheme allowed for the greatest diversity in discharging providers (n=9 and n=6 for hospitals A and B, respectively).
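
A minimal sketch of this sampling scheme follows, assuming eligibility records with last-name, discharge-date, age and disposition fields. This is our reconstruction of the procedure described above, not the authors' actual code.

```python
# Sketch of the sampling scheme: review charts alphabetically within the
# first 2 weeks of each 4-week block until 50 meet inclusion criteria.
# Record fields are hypothetical; criteria follow the study design.
from datetime import date, timedelta

def meets_criteria(record: dict) -> bool:
    # >=18 years old and discharged home (not observation status,
    # not discharged to a care facility)
    return record["age"] >= 18 and record["disposition"] == "home"

def sample_charts(eligible: list, period_start: date,
                  n_target: int = 50, n_blocks: int = 3):
    reviewed, selected = [], []
    for block in range(n_blocks):                 # one 4-week block per month
        start = period_start + timedelta(weeks=4 * block)
        end = start + timedelta(weeks=2)          # first 2 weeks of the block
        window = sorted(
            (r for r in eligible if start <= r["discharge_date"] < end),
            key=lambda r: r["last_name"],
        )
        for record in window:
            if len(selected) == n_target:
                return reviewed, selected
            reviewed.append(record)               # sequential chart review
            if meets_criteria(record):
                selected.append(record)
    return reviewed, selected
```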

Measures

Audit tool

Clinical summaries represent the product of an MU objective and its implementation in clinical practice. Thus, we developed an audit tool by merging MU standards with national guidelines for transitional care and validated tools to assess patient educational materials. Specifically, we assessed clinical summary content, organisation, readability, understandability and actionability. One author at each site (ES and HH) printed and de-identified clinical summaries for auditing and abstracted relevant demographic and health variables. In a subsequent step, two non-clinician authors (LF and MP) performed independent audits based on the tool designed by the senior author (ES). Specifically, the senior author demonstrated how to apply the audit tool (detailed below) to clinical summaries from each of the two sites and provided annotated examples to use as references. Finally, a different author (KB) conducted analyses to assess inter-rater reliability.

Content

We assessed content according to the MU VDT objective by selecting a subset of items with face validity for informing patients' postdischarge care.4,5 We excluded seven VDT items that lacked such face validity: allergy list, vital signs at discharge, lab results at discharge, summary of care document, care plan, demographics and smoking status. Two non-clinician authors (LF and MP) each independently assessed the presence or absence of the following eight content items: patient name, admission/discharge date and location, reason for hospitalisation, inpatient care team, procedures performed during admission, problem list, medication list and discharge instructions. Finally, we included three content items from a discharge checklist endorsed by the Society of Hospital Medicine: (1) follow-up appointments scheduled before discharge; (2) advice about anticipated problems (‘red flags’); and (3) a specific 24/7 call-back phone number in case of immediate postdischarge needs.17 We included these items because they are common and important elements of transitional care but are not among the criteria necessary for hospitals to meet the VDT objective. We defined ‘discharge instructions’ as either (1) manually entered by clinicians; or (2) inserted from a third-party source (eg, generic or ‘boiler plate’ instructions for specific medical conditions). Reviewers gave no credit for generic instructions unrelated to patients' primary reason for hospitalisation (eg, ‘if you are a smoker, we encourage you to quit’). For each clinical summary, two authors (LF and MP) evaluated ‘yes/no’ whether each of the 11 content items appeared in the document.
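
For concreteness, the 11 ‘yes/no’ content items can be represented as a simple checklist. The sketch below is our own rendering of the audit tool, not the authors' instrument.

```python
# The 11 content items audited per clinical summary: 8 drawn from the MU VDT
# objective plus 3 from the Society of Hospital Medicine discharge checklist.
CONTENT_ITEMS = (
    "patient name",
    "admission/discharge date and location",
    "reason for hospitalisation",
    "inpatient care team",
    "procedures performed during admission",
    "problem list",
    "medication list",
    "discharge instructions",
    "follow-up appointments scheduled before discharge",
    "anticipated problems ('red flags')",
    "24/7 call-back phone number",
)

def content_audit(judgements: dict) -> dict:
    """judgements maps item -> True/False as recorded by a reviewer;
    returns the full yes/no audit, defaulting missing items to 'no'."""
    return {item: bool(judgements.get(item, False)) for item in CONTENT_ITEMS}
```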

Organisation

We assessed organisation according to the Institute of Medicine's recommendation to promote health literate healthcare organisations.18 We selected two criteria relevant to organising patient educational materials from the Eighth Attribute, which states, ‘A health literate health care organization designs and distributes print, audiovisual, and social media content that is easy to understand and act on’. Specifically, we selected (1) focus on a limited number of messages; and (2) sequence information in a logical order (ie, primary diagnosis listed first). Two authors (LF and MP) assessed the number of unique messages, defined as each of the 11 content criteria (defined above), plus any of the following: hospital logo and mission statement, home healthcare referrals, follow-up laboratory or radiology requisitions, inventory of patient belongings, personalised instructions to access EHR-tethered patient portal, generic discharge instructions (eg, call 911 if you have chest pain), signature section, or any other section demarcated by a change in spacing or font. The same two authors evaluated ‘yes/no’ whether clinical summaries listed the primary diagnosis (identified in the provider-to-provider discharge summary) first among patients' comorbid conditions.

Readability, understandability and actionability

We calculated readability scores using Health Literacy Advisor (HLA) software, a Microsoft Word plug-in.19 Two authors (ES and HH) copied and pasted clinical summaries from their respective EHRs into Word, which was necessary to perform automated readability assessments. Two authors (MS and ES) confirmed that the copy/paste process did not affect the performance of automated readability assessments, since HLA software does not require document prepping prior to analysis, a benefit compared with built-in readability software.20 We selected five readability scales for their prevalence, prior validation and relevance to health communication: (1) Simple Measure of Gobbledygook (Precise SMOG); (2) Fry-based Electronic Readability Formula; (3) FORCAST Readability Scale; (4) Flesch–Kincaid Grade Level; and (5) Flesch Reading Ease.20 We used several readability formulas because clinical summaries contain a mixture of the components emphasised by each scale: short-syllable and long-syllable words, sentences and paragraphs, as well as bulleted lists and tables. The Precise SMOG assesses the frequency of polysyllabic words and is well suited to healthcare applications because of its consistent results and higher level of expected comprehension.21 The Fry-based Electronic Readability Formula assesses the average number of sentences and syllables per 100 words. The FORCAST Readability Scale assesses the number of single-syllable words per 150 words (making it well suited to lists). The Flesch–Kincaid Grade Level assesses the average number of syllables per word and the average number of words per sentence. The first four scales estimate readability as traditional grade levels, while the Flesch Reading Ease scores material on a 0–100 scale (higher scores indicate improved readability).20
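
To illustrate how three of these scales weigh text features, here is a minimal sketch using the published formula coefficients. The syllable counter is a crude vowel-group heuristic, so its output will differ from the validated HLA implementation.

```python
# Sketch of three readability formulas (published coefficients); the syllable
# heuristic is approximate, unlike validated tools such as HLA.
import re

def syllables(word: str) -> int:
    """Approximate syllables as runs of vowels (crude heuristic)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> dict:
    n_sent = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    n_syll = sum(syllables(w) for w in words)
    n_poly = sum(1 for w in words if syllables(w) >= 3)   # polysyllabic words
    wps, spw = n_words / n_sent, n_syll / n_words
    return {
        # Flesch-Kincaid Grade Level
        "fk_grade": 0.39 * wps + 11.8 * spw - 15.59,
        # Flesch Reading Ease (0-100; higher = easier)
        "reading_ease": 206.835 - 1.015 * wps - 84.6 * spw,
        # SMOG grade, normalised to a 30-sentence sample
        "smog_grade": 1.0430 * (n_poly * 30 / n_sent) ** 0.5 + 3.1291,
    }

print(readability("Take one tablet by mouth every morning with food."))
```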

Lastly, we used the Patient Education Materials Assessment Tool (PEMAT) to evaluate clinical summary understandability and actionability.22 This Agency for Healthcare Research and Quality–endorsed toolkit instructs assessors to ‘agree’ or ‘disagree’ with up to 26 statements about educational materials (24 of which apply to print materials). Scores range from 0% to 100%, with higher scores indicating that the material is easier to understand and act on. The PEMAT demonstrates strong internal consistency, reliability and construct validity.23 Following careful review of the PEMAT User's Guide, two non-clinician authors (LF and MP) independently evaluated each of the 100 clinical summaries and scored them according to the PEMAT rubric. Michigan State University and its affiliate hospitals' Institutional Review Boards approved our protocol.
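
PEMAT scoring reduces to a simple proportion: points earned divided by points possible, with not-applicable items excluded. A minimal sketch, assuming ratings of 1 (agree), 0 (disagree) or None (not applicable):

```python
def pemat_score(ratings: list) -> float:
    """PEMAT-style percentage: agreed items / applicable items * 100.
    ratings: 1 = agree, 0 = disagree, None = not applicable."""
    applicable = [r for r in ratings if r is not None]
    if not applicable:
        raise ValueError("no applicable items to score")
    return 100 * sum(applicable) / len(applicable)

# eg, 3 of 4 applicable items rated 'agree' -> 75.0
print(pemat_score([1, 0, 1, None, 1]))
```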

Statistical analysis

We generated descriptive statistics for each site according to the metrics in our audit tool, reported as means and ranges for continuous variables and as frequencies and percentages for categorical variables. We assessed inter-rater reliability using Spearman's rank correlation coefficient (ρ) for discrete variables and Cohen's κ for categorical variables.24–26 Since κ depends on the prevalence of attributes, it deteriorates when contingency tables contain too many zeros. In such instances, we calculated the prevalence-adjusted κ, which is a better estimate of the true nature of reviewer agreement.27 We also generated graphic displays of readability assessments (from HLA software) by estimated grade level. We performed all analyses using JMP V.12.1 (SAS Institute, Cary, North Carolina, USA).
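
As a sketch of the two agreement statistics for binary (yes/no) ratings: Cohen's κ corrects observed agreement for chance, and for a 2×2 table the prevalence-adjusted κ reduces to 2p_o − 1, where p_o is the observed proportion of agreement. The example below is illustrative, not the authors' analysis code.

```python
# Sketch of Cohen's kappa and prevalence-adjusted kappa (PABAK) for two
# reviewers' binary ratings (1 = present, 0 = absent).
def cohens_kappa(a: list, b: list) -> float:
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n            # observed agreement
    pa, pb = sum(a) / n, sum(b) / n                       # 'yes' rates
    pe = pa * pb + (1 - pa) * (1 - pb)                    # chance agreement
    return 1.0 if pe == 1 else (po - pe) / (1 - pe)

def pabak(a: list, b: list) -> float:
    """Prevalence-adjusted kappa: 2 * observed agreement - 1."""
    po = sum(x == y for x, y in zip(a, b)) / len(a)
    return 2 * po - 1

# When both reviewers mark an element absent in nearly every chart, kappa
# collapses toward zero while PABAK reflects the near-perfect agreement.
a = [0] * 49 + [1]
b = [0] * 50
print(cohens_kappa(a, b), pabak(a, b))   # 0.0 vs 0.96
```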

Results

Discharged patients were middle-aged and had multiple chronic conditions (table 1). On average, clinical summaries were six pages, but the number of pages varied considerably (range 3–12, table 1). De-identified examples are available for hospital/vendor A (see online supplementary appendix 1) and hospital/vendor B (see online supplementary appendix 2).

Table 1

Demographics of patient population and average length of their clinical summaries

Content

Some MU elements were universally auto-populated into clinical summaries (problem lists and medication lists), while other key elements were never included (inpatient care team; table 2). Most—but not all—clinical summaries contained discharge instructions (77% and 93% at hospitals A and B, respectively). At both sites, clinicians were more likely to insert third-party generic patient educational materials than to manually enter personalised discharge instructions (table 2). Elements endorsed by the Society of Hospital Medicine were inconsistently included. The percentage of follow-up appointments scheduled prior to discharge was higher at hospital B than at hospital A (43% vs 17%), which may reflect differences in discharge planning policies at the two sites. Only half of the summaries included condition-relevant ‘red flags’ to watch for after discharge. However, templates produced at both institutions auto-populated generic warnings (eg, the face, arms, speech, time (FAST) scale for stroke (see online supplementary appendix 1) or ‘call 911 if you have any chest pain’ (see online supplementary appendix 2)). Clinical summaries universally failed to include a 24/7 call-back phone number in case patients had immediate postdischarge concerns (table 2). Inter-rater agreement was very good for most content elements (κ>0.8 or ρ>0.8), with the exceptions of clearly identifying the reason for hospitalisation (κ=0.57, hospital/vendor A), including disease-specific discharge instructions (κ=0.72, hospital/vendor B) and advising about ‘red flags’ (κ=0.60 and κ=0.24 at hospitals A and B, respectively). Overall agreement for the selected MU VDT content elements was very good at both sites (κ=0.92 for hospital/vendor A and κ=0.95 for hospital/vendor B).

Table 2

Assessing clinical summaries for patient-centred content

Organisation

Clinical summaries contained an average of 15 unique messages at hospital A and 12 unique messages at hospital B (table 3). Clinical summaries at hospital/vendor A contained non-clinical data (eg, an inventory of patient belongings (see online supplementary appendix 1)), and those from hospital/vendor B contained duplicative medication lists (eg, one reconciled list and one consolidated list (see online supplementary appendix 2)), which may obscure key content. Fewer than half of the clinical summaries listed the primary discharge diagnosis first (42% and 38% at hospitals A and B, respectively; table 3). Moreover, clinical summaries produced at both sites included medication lists formatted according to reconciliation mandates, generating separate medication subsections (eg, start, continue, change or stop). Inter-rater reliability was fair to moderate (ρ=0.25–0.53) for assessing the number of unique messages, and good to very good (κ=0.67–0.84) for listing the primary diagnosis first (table 3).

Table 3

Assessing clinical summaries for patient-centred organisation

Readability, understandability and actionability

Document language was universally above the recommended 6th grade reading level, averaging 8th–12th grade depending on the scale used (figure 1). Importantly, the Precise SMOG estimates readability at 2–3 grade levels higher than the Flesch–Kincaid scale, reflecting the higher level of expected comprehension for health-related materials.21,28 Frequently, diagnoses auto-populated into patients' problem lists referenced an International Classification of Diseases code, thereby prompting inclusion of medical jargon. For example, one clinical summary included ‘pulmonary oedema cardiac cause’ and ‘respiratory failure with hypercapnia’ in the problem list (see online supplementary appendix 1). Documents scored poorly on PEMAT understandability (range 15%–40%) and actionability (32%–41%) assessments (table 4). Reviewers agreed that clinical summaries scored highly for using the active voice (PEMAT #5; table 4). By contrast, they universally scored summaries deficient in seven areas (κ=1.00 for each): making their purpose completely evident (PEMAT #1), avoiding distracting content (PEMAT #2), using informative headers (PEMAT #9), providing information in a logical sequence (PEMAT #10), providing an overall summary (PEMAT #11), using visual aids such as charts (PEMAT #26) and, when charts were present, giving them clear row and column headings (PEMAT #19). Overall, inter-rater reliability was moderate for hospital/vendor A (κ=0.55 to κ=0.56) and good for hospital/vendor B (κ=0.72 to κ=0.76).

Table 4

Assessing clinical summaries for understandability and actionability

Figure 1

Clinical Summaries: Readability. Clinical summaries exceed the recommended sixth grade reading level (indicated by the black horizontal line). The Precise SMOG (Simple Measure of Gobbledygook) assesses the frequency of polysyllabic words. The Fry-based Electronic Readability Formula assesses the average number of sentences and syllables per 100 words. The FORCAST Readability Scale assesses the number of single-syllable words per 150 words (ideal for lists). The Flesch–Kincaid Grade Level assesses the average number of syllables per word and the average number of words per sentence. The Flesch Reading Ease scores material on a 0–100 scale (higher scores indicate improved readability) rather than a specific grade level; scores were 61.6 and 54.0 for hospital/vendor A and hospital/vendor B, respectively.

Discussion

The aim of this pilot study was to assess EHR-generated clinical summaries according to their content, organisation and understandability. The results highlight opportunities to improve clinical summaries for guiding patients' care following hospital discharge. Overall, we found that clinical summaries were lengthy and disorganised, lacked key content, and scored poorly on assessments of understandability and actionability. Although clinical summaries averaged six pages, they universally failed to identify members of the care team (including the discharging provider) or to provide a specific 24/7 call-back phone number in case of problems immediately following discharge. Equally worrisome, only 40% of clinical summaries listed a patient's primary discharge diagnosis first among their list of comorbid conditions. Medication lists were organised based on reconciliation mandates, generating subsections (eg, start, continue, change or stop) rather than consolidating medications into a patient-centred list based on standard dosing times (eg, a ‘refrigerator list’), as guidelines recommend.29,30 Furthermore, clinical summaries were written well above the sixth grade reading level, with most scoring between the 8th and 12th grade levels. Finally, clinical summaries scored poorly on assessments of understandability and actionability, with reviewers agreeing that documents scored zero on at least one-third of the relevant PEMAT items at both sites (deficient in ≥8 of the 24 items for print materials).

Medication lists embedded within clinical summaries provide a clear example of the deficits identified in our study. Lengthy medication lists spanned multiple pages, and subsequent pages lacked column headings, which could make dosing instructions difficult to interpret (see online supplementary appendix 1, p. 3 of 9). Moreover, only 16% of medication lists containing short-acting insulin provided explicit dosing instructions; most instead offered a general statement to use a ‘sliding scale’ without defined parameters (see online supplementary appendix 2, p. 6 of 12). Lastly, organising lists based on reconciliation mandates may lead to confusion when EHR systems ‘over-interpret’ differences between preadmission and discharge medication regimens. For example, note the ‘new’ versus ‘old’ dosing instructions for sevelamer (see online supplementary appendix 2, p. 8 of 12), where the only difference is ‘take with snacks’ versus ‘take with meals’.
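
To make the contrast concrete, a reconciliation-organised list can be regrouped into a Universal Medication Schedule-style grid keyed by standard dosing times. The sketch below is our illustration of that layout; the drugs and times are hypothetical.

```python
# Sketch: collapse reconciliation subsections (start/continue/change/stop)
# into a patient-centred grid by standard dosing time (hypothetical data).
from collections import defaultdict

reconciled = {
    "start":    [("lisinopril 10 mg", ["morning"])],
    "continue": [("metformin 500 mg", ["morning", "evening"])],
    "change":   [("insulin glargine 20 units", ["bedtime"])],
    "stop":     [("ibuprofen 400 mg", [])],   # stopped drugs leave the grid
}

def to_ums_grid(reconciled: dict) -> dict:
    grid = defaultdict(list)
    for subsection, meds in reconciled.items():
        if subsection == "stop":
            continue
        for drug, times in meds:
            for t in times:
                grid[t].append(drug)
    # one consolidated list per standard dosing time
    return {t: grid[t] for t in ("morning", "noon", "evening", "bedtime")}

print(to_ums_grid(reconciled))
```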

Inter-rater agreement was very good for the ‘content’ elements included in our scoring rubric (overall κ>0.90 at both sites). By contrast, inter-rater agreement was only moderate for the ‘organisation’ elements and the PEMAT understandability and actionability scores. Reliability may be lower for the PEMAT scores because of the subjectivity of some of its measures. Since the traditional κ statistic is heavily influenced by prevalence,31 it can yield falsely low scores when the variance between reviewers is zero. We observed this problem when both reviewers unanimously agreed that elements were absent (eg, none of the clinical summaries identified members of patients' care teams). In such circumstances, we used the prevalence-adjusted κ, which overcomes some of the limitations of the traditional κ and more accurately reflects the true degree of inter-rater reliability.

Our work is novel because it is the first study to evaluate EHR-generated clinical summaries in the acute care setting. Notably, literature on clinical summaries exists for the outpatient setting, but not for the equivalent document provided to patients during the hospital discharge process.32–34 Moreover, these studies assess patient and provider perceptions of clinical summaries, with limited data on health outcomes, aside from self-reported medication adherence in one study.32 Regardless, our results are consistent with others in identifying opportunities to improve clinical summaries. Importantly, we agree that revisions should incorporate feedback from end-users, including providers (since their workflow generates clinical summaries) and patients, who are the ultimate recipients.

Despite the need to improve clinical summaries for patients, it is unclear how EHR vendors, healthcare systems and clinicians negotiate their overlapping responsibilities to generate and refine them. For example, starting from ‘off-the-shelf’ EHR software, how does a healthcare system optimally customise features to refine clinical summary templates? Moreover, how can clinicians ensure succinct, highly relevant patient educational tools without a prompt to preview documents before nurses print them for patients? Moving forward, we need greater transparency to understand how local hospital EHR customisation and clinician-specific workflows influence clinical summary templates to generate usable documents for patients. Ideally, future programmes will address these sociotechnical factors35—interactions between patients, providers and health information technology workflows—affecting EHR-generated products and their implementation in clinical practice.

Study limitations relate to our pilot design, including the small sample size and two sites, which prevent broad generalisation of our results. In deciding which relevant content metrics MU omitted, we chose the Society of Hospital Medicine checklist.17 However, we acknowledge that other regulatory organisations may recommend different content elements depending on the population served. While providers can insert generic discharge instructions into clinical summaries through point-and-click menu options, they can also modify those instructions, a variable we did not assess. Finally, this work did not assess patients' perceptions or understanding of their clinical summaries, nor their ability to carry out action items embedded within the documents. In the future, we plan to elicit patients' feedback for improving clinical summaries based on the domains of our audit tool: content, organisation and understandability. In addition, we will broaden the scope of our evaluation by increasing the sample size and evaluating summaries from multiple institutions.

Despite these limitations, it is important to disseminate knowledge of suboptimal clinical summaries because of their broad implications. While MU no longer mandates clinical summaries, CMS continues to support their provision as a clinical best practice.36,37 Thus, given their evolution from MU and widespread use in US hospitals, clinical summaries demand critical evaluation to ensure that the final product optimally leverages EHR technology to improve patient care. Ideally, future incentive programmes will incorporate established recommendations, such as the health literacy best practices adopted by the Re-Engineered Discharge Program (Project RED) and the Universal Medication Schedule, which promote understanding of complex medication regimens at hospital discharge.29,30 Integrating these and similar patient-centred principles into refined clinical summary templates could positively impact the 35 million patients discharged from US hospitals annually.38

In conclusion, we found that currently produced clinical summaries are lengthy, omit or obscure key discharge information, are written at the 8th–12th grade reading level and score poorly on assessments of understandability and actionability. Since vendors, healthcare systems and clinicians share overlapping responsibility for generating clinical summaries, they should collaborate and solicit feedback from patients (the end-users) to improve the product.

Acknowledgments

The authors thank Julia Adler-Milstein, PhD at the University of Michigan and Judy Arnetz, PhD at Michigan State University for reviewing drafts of our manuscript.

References

Footnotes

  • Contributors Study concept and design: ES and CG. Acquisition, analysis or interpretation of data: all authors. Drafting of the manuscript: ES and CG. Critical revision of the manuscript for important intellectual content: all authors. Statistical analysis: ES and KB. Administrative, technical or material support: HH, JS, LF, MP, MS and KB. Study supervision: ES and CG.

  • Funders This work was supported by the Michigan Department of Health and Human Services Contract #20151533-00 (HIT Resource Center) and Michigan State University Institute for Health Policy.

  • Competing interests ES reports income from the Center for Medical Education for her role as a commentator in Continuing Medical Education (CME) audio publications.

  • Ethics approval Michigan State University and its affiliate hospitals' institutional review boards approved our protocol.

  • Provenance and peer review Not commissioned; externally peer reviewed.
