Who is Being Assessed or Evaluated?:
Instrument Type:
Self-report (e.g., survey, questionnaire, self-rating)
Observer-based (e.g., rubric, rating tool, 360-degree feedback)
Notes for Type:
The same items were rated independently by observers and by team members. Observers made their ratings while viewing recorded simulations; team members made their ratings immediately after the simulation. Four standardized emergency scenarios (two airway and two cardiovascular) used a METI patient simulator in a high-fidelity environment.
Source of Data:
Health care providers, staff
Notes for Data Sources:
The observers were anaesthetists and/or critical care specialists. The team member ratings were completed by doctors and nurses in intensive care unit teams.
Notes for Content:
The items on the tool reflect three major factors:
- Leadership and Team Coordination
- Mutual Performance Monitoring
- Verbalising Situational Information
Two additional items reflect overall behavioral performance and overall performance when both technical and non-technical skills are considered.
Item Format:
23 7-point Likert-type items ranging from Never/rarely (1) to Consistently (7); 2 7-point Likert-type items ranging from Poor (1) to Excellent (7).
Administration:
Three observers (anaesthetists, intensive care specialists, or both) were trained using exemplar videos and consensus-building discussions. The observers then independently rated all four recorded simulations for each of the 40 teams. Team members were given the instrument without training or prior exposure.
Scoring:
Team members: Mean scores are calculated across the four team members for each of the three factors and for overall performance.
Observers: Mean scores are calculated across the three observers for each of the three factors and for overall performance.
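As a rough illustration of this averaging step, the following minimal Python sketch shows how per-factor team-level scores could be computed; the scores, factor keys, and variable names are hypothetical and are not taken from the instrument itself.

    # Minimal sketch (hypothetical data; not the authors' scoring code).
    from statistics import mean

    # Hypothetical factor scores from the four team members (1-7 scale).
    team_member_scores = {
        "Leadership and Team Coordination": [5.2, 6.0, 5.5, 5.8],
        "Mutual Performance Monitoring": [4.9, 5.4, 5.1, 5.6],
        "Verbalising Situational Information": [6.1, 5.8, 6.3, 5.9],
        "Overall performance": [5.5, 6.0, 5.0, 5.5],
    }

    # Team-level score for each factor is the mean across the four team members;
    # observer scores would be averaged across the three observers the same way.
    team_scores = {factor: mean(ratings) for factor, ratings in team_member_scores.items()}
    print(team_scores)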
Access:
Open access (available on this website)
Notes on Access:
Contact the author to confirm access.