Learning In Practice

Do short courses in evidence based medicine improve knowledge and skills? Validation of Berlin questionnaire and before and after study of courses in evidence based medicine

BMJ 2002; 325 doi: https://doi.org/10.1136/bmj.325.7376.1338 (Published 07 December 2002) Cite this as: BMJ 2002;325:1338
  1. L Fritsche, senior lecturer (a),
  2. T Greenhalgh, professor (b),
  3. Y Falck-Ytter, researcher (c),
  4. H-H Neumayer, professor (a),
  5. R Kunz, senior lecturer (rkunz@uhbs.ch) (a)
  1. (a) Department of Nephrology, Charité-Campus Mitte, 10117 Berlin, Germany
  2. (b) Department of Primary Health Care, University College London, London N19 3UA
  3. (c) German Cochrane Centre, Institute for Medical Biometry and Medical Informatics, University of Freiburg, Germany
  1. Correspondence to: R Kunz
  • Accepted 18 October 2002

Abstract

Objective: To develop and validate an instrument for measuring knowledge and skills in evidence based medicine and to investigate whether short courses in evidence based medicine lead to a meaningful increase in knowledge and skills.

Design: Development and validation of an assessment instrument and before and after study.

Setting: Various postgraduate short courses in evidence based medicine in Germany.

Participants: The instrument was validated with experts in evidence based medicine, postgraduate doctors, and medical students. The effect of the courses was assessed in postgraduate doctors from medical and surgical backgrounds.

Intervention: Intensive 3 day courses in evidence based medicine delivered through tutor facilitated small groups.

Main outcome measure: Increase in knowledge and skills.

Results: The questionnaire distinguished reliably between groups with different expertise in evidence based medicine. Experts attained a threefold higher average score than students. Postgraduates who had not attended a course performed better than students but significantly worse than experts. Knowledge and skills in evidence based medicine increased after the course by 57% (mean score before course 6.3 (SD 2.9) v after course 9.9 (SD 2.8), P<0.001). No difference was found among experts or students in the absence of an intervention.

Conclusions: The instrument reliably assessed knowledge and skills in evidence based medicine. An intensive 3 day course in evidence based medicine led to a significant increase in knowledge and skills.

What is already known on this topic

Numerous observational studies have investigated the impact of teaching evidence based medicine to healthcare professionals, with conflicting results

Most of the studies were of poor methodological quality

What this study adds

An instrument assessing basic knowledge and skills required for practising evidence based medicine was developed and validated

An intensive 3 day course on evidence based medicine for doctors from various backgrounds and training levels led to a clinically meaningful improvement in knowledge and skills

Introduction

It is often assumed that training health professionals in evidence based medicine reduces unacceptable variation in clinical practice and leads to improved patient outcomes. This will only be true if the training improves knowledge and skills and if these are in turn translated into improved clinical decision making.

Recent reviews focusing mainly on teaching critical appraisal have cast doubt on the effectiveness of training in evidence based medicine.1-5 Despite the general impression that some benefit might result from such training, most studies were poorly designed and the conclusions tentative. A recently published, well designed trial showed the effectiveness and durability of teaching evidence based medicine to residents, but the conclusions were weakened because the instruments used to measure knowledge and skills had not been validated.6

We aimed to develop and validate an instrument to assess changes in knowledge and skills of participants on a course in evidence based medicine and to investigate whether short courses in evidence based medicine lead to a significant increase in knowledge and skills.

Methods

Our study comprised three stages: development of the instrument, validation of the instrument, and before and after assessment of the effect of a short course in evidence based medicine. The instrument was developed by five experienced teachers in evidence based medicine (N Donner-Banzhoff, LF, H-W Hense, RK, and K Wegscheider).

Participants

The instrument was validated by administering it to a group of experts in evidence based medicine (tutors with formal methodological training or graduates from a training workshop for tutors in evidence based medicine) and controls (third year medical students with no previous exposure to evidence based medicine). We then administered the instrument to participants on the evidence based medicine course in Berlin (course participants) with little exposure to evidence based medicine. We included three cohorts: 82 participants attending the course in 1999 (course A), 50 attending the course in 2000 (course B), and 71 attending the course in 2001 (course C).

Development and validation of instrument

We aimed to develop an instrument that measures doctors' basic knowledge about interpreting evidence from healthcare research, skills to relate a clinical problem to a clinical question and the best design to answer it, and the ability to use quantitative information from published research to solve specific patient problems. The questions were built around typical clinical scenarios and linked to published research studies. The instrument was designed to measure deep learning (ability to apply concepts in new situations) rather than superficial learning (ability to reproduce facts). The final instrument consisted of two sets of 15 test questions with similar content (see bmj.com).

We assessed equivalence of the two sets and their reliability.7 8 We considered a Cronbach's α greater than 0.7 as satisfactory.9 10 We assessed the instrument's ability to discriminate between groups with different levels of knowledge by comparing the three groups with varying expertise: experts versus course participants (before test) versus controls (analysis of variance with Scheffé's method for post hoc comparisons).
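For illustration only, a minimal sketch of these two psychometric checks in Python; the item level data, sample sizes, and library calls below are assumptions for illustration, not the authors' original analysis code:

import numpy as np
from scipy import stats

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) matrix of 0/1 item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # 15 questions per set
    item_variances = items.var(axis=0, ddof=1)  # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the total score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(50, 15))   # hypothetical 0/1 answers to one set
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")

# Discriminative ability: one-way analysis of variance across the three
# expertise groups (total scores out of 15); Scheffé post hoc comparisons
# would follow but are omitted from this sketch.
experts = rng.normal(11.9, 1.6, 43)
participants = rng.normal(6.3, 2.9, 203)
controls = rng.normal(4.2, 2.2, 20)
f_statistic, p_value = stats.f_oneway(experts, participants, controls)
print(f"ANOVA: F = {f_statistic:.1f}, P = {p_value:.3g}")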

Educational effect

Educational intervention

The 3 day course is based on the model developed at McMaster University, Canada11; a curriculum has been published separately.12 The course introduces motivated doctors with little prior knowledge of evidence based medicine to its principles (for example, identification of problems, formulation of questions, critical appraisal, consideration of clinical decision options) and promotes the appropriate use of appraised evidence, especially quantitative estimates of risk, benefit, and harm.

Administration of Berlin questionnaire—Participants received a questionnaire within the 4 weeks before the course (before test) and another on the last day of the course (after test). The sequence of the test sets was reversed year on year. The participants were explicitly informed about the experimental character of the test, that participation was voluntary, and that results would not be disclosed to them. Tutors were asked not to modify their sessions with a view to coaching for the test.

Analysis of effect—The main outcome measure was change in mean score after the intervention (absolute score difference). We also measured relative change in score (unadjusted) and relative change in score adjusted for differences in score before the course, calculated as gain achieved divided by maximal achievable gain.13 Correct answers scored 1 point, wrong answers 0 points. We compared before and after scores with a paired t test. We also conducted a sensitivity analysis with an unpaired t test comparing the scores of participants who completed only one set with those who completed both sets. We considered a P value of less than 0.05 as significant.
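A minimal sketch of this outcome analysis in Python, using simulated scores; the data and sample sizes are assumptions for illustration, not the study's analysis code:

import numpy as np
from scipy import stats

MAX_SCORE = 15  # 15 questions per set, 1 point per correct answer

rng = np.random.default_rng(1)
before = np.clip(rng.normal(6.3, 2.9, 161), 0, MAX_SCORE - 1)      # simulated before scores
after = np.clip(before + rng.normal(3.6, 2.0, 161), 0, MAX_SCORE)  # simulated after scores

# Main outcome: absolute change in mean score, tested with a paired t test
t_paired, p_paired = stats.ttest_rel(after, before)
print(f"Mean change: {(after - before).mean():.1f} points, P = {p_paired:.3g}")

# Relative change, unadjusted and adjusted for each participant's potential
# for improvement (gain achieved divided by maximal achievable gain)
crude_relative_gain = (after.mean() - before.mean()) / before.mean()
adjusted_relative_gain = ((after - before) / (MAX_SCORE - before)).mean()
print(f"Crude relative gain: {crude_relative_gain:.0%}")
print(f"Adjusted relative gain: {adjusted_relative_gain:.0%}")

# Sensitivity analysis: unpaired t test comparing complete responders
# with simulated partial responders (one set returned)
partial = np.clip(rng.normal(6.3, 2.9, 55), 0, MAX_SCORE)
t_unpaired, p_unpaired = stats.ttest_ind(before, partial)
print(f"Sensitivity analysis: P = {p_unpaired:.3g}")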

Results

In total, 266 people took part in our study: 43 experts in evidence based medicine, 20 controls, and 203 participants of one of three courses in evidence based medicine. Twelve per cent (n=25) of the last group had some exposure to evidence based medicine before the course. Overall, 161 participants (61%) returned both sets of the questionnaire (see bmj.com). The main reasons for partial completion were failure to submit the questionnaire before the course (n=55), failure to participate in the course (n=8), and failure of identification (n=38).

Validation and discrimination

Course participants scored moderately poorly on the questionnaire administered before the course (mean score per question 0.42 (0.19)), whereas experts scored well (0.81 (0.29)) and controls poorly (0.29 (0.43)). The two sets of questionnaires were psychometrically equivalent (intraclass correlation coefficient for students and experts 0.96 (95% confidence interval 0.92 to 0.98, P<0.001)). Cronbach's α was 0.75 for set 1 and 0.82 for set 2.

The mean scores of controls (4.2 (2.2)), course participants (6.3 (2.9)), and experts (11.9 (1.6)) were significantly different (analysis of variance, P<0.001; all comparisons between groups, P<0.01), whereas the scores of course participants in all three courses before the course were comparable (course A, 5.8 (2.8); B, 6.9 (2.8); and C, 6.6 (3.0); analysis of variance, P>0.5). The instrument distinguished reliably between groups with different expertise in evidence based medicine; groups with comparable knowledge performed consistently (fig 1).

Fig 1 Assessment of discriminative ability using Berlin questionnaire

Gain in knowledge and skills

Participation in the course was associated with a mean improvement of 3.6 out of 15 questions answered correctly (P<0.001), a significant increase in knowledge and skills (fig 2). This result was similar across all three courses, but the scores of the course participants (9.9 (2.4)) remained significantly below those achieved by the experts (11.9 (1.6); P<0.001). The crude relative increase in scores across all three courses was 57%. When adjusted for the individual potential for improvement, the improvement rate was 36% (46%). Sensitivity analysis did not detect a significant difference between partial responders (one set returned) and complete responders (both sets returned).
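The crude figure follows directly from the group means (a gain of 3.6 points on a before course mean of 6.3, or 3.6/6.3 ≈ 57%), whereas the adjusted figure averages each participant's gain as a proportion of his or her maximum achievable gain (15 minus the score before the course).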

Fig 2 Comparison of before and after test scores of participants of consecutive short courses in evidence based medicine (paired analysis)

Discussion

Objective evaluation of training in evidence based medicine is difficult but essential, because self perception of ability in evidence based medicine correlates poorly with objective assessment of knowledge and skills.14 Most studies have lacked appropriate instruments.15-17 Recent reviews of critical appraisal programmes showed only non-significant effects.1 3 5 By using a validated questionnaire that reliably distinguishes between different competence levels, we found a 57% crude increase in knowledge and skills after short courses in evidence based medicine, a gain that is likely to be educationally significant.

Not all skills in evidence based medicine (for example, formulation of questions, competencies in searching) taught in the course were captured by the instrument. Instead, the instrument concentrated more on the handling of research information. Some claim that critical appraisal is the least important step for practising evidence based medicine, referring to increasingly available resources that have already been appraised.18-21 But even this “preprocessed” information is hard to apply unless the practitioner is competent in interpreting commonly used quantitative measures of risk and benefit.22

Evaluation of educational interventions concerns at least four dimensions: satisfaction of participants, learning (knowledge and skills), behavioural change (transfer of knowledge and skills to the workplace), and outcomes (impact on patients).23 Our instrument assessed short term learning, but our study was not designed to measure the long term effect on knowledge or even short term behavioural change. Although improved skills are surely preconditions for change in behaviour, more research is needed on the impact of courses in evidence based medicine on clinical behaviour.1 3 4

The intervention (a short training course in evidence based medicine) encompasses numerous components. Self selection of motivated doctors, active learning techniques, relevance to clinical practice, and intensity of the programme (participant to tutor ratio of 4:1) are likely to be important factors contributing to the learning effect. We aimed to investigate whether an effect of teaching evidence based medicine could be shown. We found a substantial effect, but our results cannot distinguish the separate contribution of each component. Furthermore, factors other than the course could be partially responsible for the observed effect4: for example, the inability to blind the intervention and assessment could have led to improvement due to awareness of being evaluated (Hawthorne effect), studying at home in advance of the course, or an impact of the study on tutors' behaviour. To reduce such an impact we made enrolment in the test voluntary, explained its experimental nature, and withheld feedback (even the correct answers). Only two of the tutors were involved in the development of the test and the conduct of the study. The questionnaires were administered only to participants and retrieved after completion.

Our results remain valid even if learning is enhanced by the inclusion of a formal assessment of knowledge and skills before and after the course. Indeed, now that a valid instrument is available, assessment of participants may become routine in courses in evidence based medicine as part of quality assurance.

Further research

Our study contributes to the validation of intensive, problem based curriculums in training in evidence based medicine. Further research should distinguish the individual components of the courses that determine their effectiveness and assess the impact on patient outcomes. Expansion of the question sets and validation of the Berlin questionnaire in different languages, professional groups, and cultural settings will enable the generalisability of our findings to be tested in other settings, as well as allowing comparisons between countries and the evaluation of different teaching methods.

Acknowledgments

We thank N Donner-Banzhoff (Marburg), H-W Hense (Münster), and K Wegscheider (Berlin) for their support for development and critical discussion of the questions, the Kaiserin-Friedrich Foundation (J Hammerstein, Ch Schröter), the Berlin Chamber of Physicians (G Jonitz), and the University Hospital Charité (in particular the students from the computer pool) for organising the workshops, R Kersten, J Meyerrose, and B Meyerrose for their assistance in collecting and analysing the data, and the participants and tutors. Course organisers who are interested in participating in the ongoing international study on teaching evidence based medicine should contact RK.

Contributors: LF and RK conceived and designed the study. LF, RK, TG, and YF-Y analysed and interpreted the data. RK and TG wrote the drafts of the paper. LF, H-HN, and YF-Y critically revised the paper. RK and LF will act as guarantors for the paper.

Footnotes

  • Funding: RK is supported by an academic career award for women from the Senate of Berlin.

  • Competing interests: None declared.

    Full details of the instrument and a table showing completion of the questionnaire appear on bmj.com
