
Developing simulation-based assessment tools

Abstract Number: 40
Abstract Type: Original Research

Thomas M. Chalifoux, M.D.1; Jonathan H. Waters, M.D.2

Introduction

Simulation plays an increasing role in resident education, both as a teaching tool and as a means of assessing competency. However, how competency should be defined and measured has not been established for the many clinical situations in which performance can, and should, be measured. Commonly, the judgment of one or two individuals determines curricular content. The Delphi method, a tool for expert panel-based forecasting and consensus building, has also been used to determine the content and scoring of simulation-based assessments.

Objective

To determine the variation in academic anesthesiologists’ judgment on the parameters used to assess the performance of residents in a simulated cesarean section.

Method

A list of 50 directly observable tasks performed by an anesthesiology resident during an elective cesarean section for an ASA 1 or 2 parturient under spinal anesthesia was developed after clinical observation by two investigators. Tasks associated with routine care and situations requiring additional care (post-spinal hypotension, uterine atony, and post-partum hemorrhage) were included. All 20 obstetrical anesthesiologists from a single, large (10,000 births per year), university-based tertiary care center answered the first round of a Delphi survey to determine both the content of, and scoring system for, an assessment tool. Each panelist rated each task on a Likert scale (1 = not important, 5 = very important), with respect to its importance in determining the competency of an anesthesiology resident. A web-based survey service (Qualtrics™, Provo, UT, USA) was used to conduct a quasi-anonymous survey.

Results

The scores assigned to a single task often varied. Across the 50 tasks, the interquartile range (IQR) ranged from 0 to 2.25; the IQR was ≥ 1 for 82% of tasks and ≥ 2 for 28%. (See graph for a data sample.) When queried, no panelist suggested adding any other tasks to the list.
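To make the dispersion statistic concrete, the short sketch below computes the interquartile range of one task's Likert ratings. The ratings shown are hypothetical, invented for illustration; they are not the study's data.

```python
# Hypothetical Likert ratings (1 = not important, 5 = very important)
# from a panel of 20 raters for a single task -- illustrative only.
import statistics

ratings = [5, 4, 5, 3, 5, 4, 2, 5, 4, 3, 5, 5, 4, 3, 5, 4, 5, 2, 4, 5]

# statistics.quantiles with n=4 returns the three quartile cut points.
q1, q2, q3 = statistics.quantiles(ratings, n=4)
iqr = q3 - q1  # spread of the middle 50% of panelists' ratings
print(f"Q1={q1}, median={q2}, Q3={q3}, IQR={iqr}")
```

A large IQR on a task signals panel disagreement about that task's importance; in the study, an IQR ≥ 1 (as here) was the common case.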

Conclusions

Significant variation exists in the relative importance anesthesiologists assigned to the tasks used to assess performance in this construct. The judgment of a few individuals may not suffice; a sufficient sample of anesthesiologists, using a technique such as the Delphi method, should be used to develop assessment tools. Criteria for selecting assessment parameters should be established before simulation-based assessment is incorporated into curricula.



SOAP 2011