Developing simulation-based assessment tools
Abstract Number: 40
Abstract Type: Original Research
Simulation is playing an increasing role in resident education, both as a tool to teach and as a tool to assess competency. However, for many of the clinical situations in which performance can, and should, be measured, it has not been established how competency should be defined and measured. Commonly, the judgment of one or two individuals determines curricular content. The Delphi method, a tool for expert panel-based forecasting and consensus building, has also been used to determine the content and scoring of simulation-based assessments.
To determine the variation in academic anesthesiologists’ judgment on the parameters used to assess the performance of residents in a simulated cesarean section.
A list of 50 directly observable tasks performed by an anesthesiology resident during an elective cesarean section for an ASA 1 or 2 parturient under spinal anesthesia was developed after clinical observation by two investigators. Tasks associated with routine care and situations requiring additional care (post-spinal hypotension, uterine atony, and post-partum hemorrhage) were included. All 20 obstetrical anesthesiologists from a single, large (10,000 births per year), university-based tertiary care center answered the first round of a Delphi survey to determine both the content of, and scoring system for, an assessment tool. Each panelist rated each task on a Likert scale (1 = not important, 5 = very important), with respect to its importance in determining the competency of an anesthesiology resident. A web-based survey service (Qualtrics™, Provo, UT, USA) was used to conduct a quasi-anonymous survey.
The scores assigned to a single task often varied across panelists. For the 50 tasks, interquartile ranges spanned 0 to 2.25; 82% of tasks had an IQR ≥ 1 and 28% had an IQR ≥ 2. (See graph for a data sample.) When queried, no panelist suggested adding any tasks to the list.
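The per-task spread statistic reported above can be sketched as follows. The ratings below are illustrative placeholders, not the study's data, and the task names are hypothetical; the IQR here is computed with `numpy.percentile` (Q3 − Q1) under NumPy's default linear interpolation.

```python
import numpy as np

# Hypothetical Likert ratings (1 = not important, 5 = very important)
# from 20 panelists for three example tasks. Values are illustrative
# only; they are not the study's actual survey responses.
ratings = {
    "example task A": [5, 5, 4, 5, 5, 4, 5, 5, 5, 4,
                       5, 5, 4, 5, 5, 5, 4, 5, 5, 5],
    "example task B": [3, 4, 2, 5, 3, 4, 3, 2, 4, 3,
                       5, 3, 4, 2, 3, 4, 3, 3, 4, 2],
    "example task C": [2, 4, 3, 5, 1, 4, 2, 3, 5, 2,
                       4, 3, 1, 5, 2, 4, 3, 2, 5, 3],
}

# Interquartile range (Q3 - Q1) per task: the per-task spread
# statistic summarized in the Results.
for task, scores in ratings.items():
    q1, q3 = np.percentile(scores, [25, 75])
    print(f"{task}: IQR = {q3 - q1:.2f}")
```

A larger IQR indicates greater disagreement among panelists about a task's importance, which is the variation the study quantifies.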
Significant variation exists in the relative importance anesthesiologists assigned to the tasks used to assess performance in this construct. The judgment of a few individuals may not suffice; a sufficient sample of anesthesiologists, surveyed with a technique such as the Delphi method, should be used to develop assessment tools. Criteria for selecting assessment parameters should be established before simulation-based assessment is incorporated into curricula.