The University of Auckland Behavioural Rating Scale (UA-BRS)

Submitted by National Center... on Sep 6, 2016 - 11:12am CDT

Instrument
Authors: 
Weller, J.
Frengley, R.
Torrie, J.
Shulruf, B.
Jolly, B.
Hopley, L.
Henderson, K.
Dzendrowskyj, P.
Yee, B.
Paul, A.
Overview: 

This tool was designed to assess teamwork behaviors exhibited by critical care teams during simulated emergencies. Specifically, 23 items rate leadership and team coordination, mutual performance monitoring, and verbalising situational information. The tool can be completed by team members (group self-assessment) and/or by observers. The 2011 validation study was conducted on 160 individuals in 40 critical care teams, each engaged in 4 scenarios; its results helped establish the factor structure of the items and the reliability and generalizability of scores. In the 2013 study, ratings submitted by individual team members correlated significantly with those of external observers. Team member scores are meant to structure reflection on teamwork, and observer scores to evaluate interventions.

Link to Resources
Descriptive Elements
Who is Being Assessed or Evaluated?: 
Individuals
Teams
Instrument Type: 
Self-report (e.g., survey, questionnaire, self-rating)
Observer-based (e.g., rubric, rating tool, 360 degree feedback)
Notes for Type: 

The same items were rated independently by observers and by team members. Observers made ratings while viewing recorded simulations; team members made ratings immediately after the simulation. The four standardized emergency scenarios (i.e., two airway and two cardiovascular) used a METI patient simulator in a high-fidelity environment.

Source of Data: 
Health care providers, staff
Notes for Data Sources: 

The observers were anaesthetists and/or critical care specialists. The team member ratings were completed by doctors and nurses in intensive care unit teams.

Instrument Content: 
Behaviors / skills
Notes for Content: 

The items on the tool reflect three major factors:

  1. Leadership and Team Coordination
  2. Mutual Performance Monitoring 
  3. Verbalising Situational Information

Two additional items rate overall behavioral performance and overall performance when both technical and non-technical skills are considered.

Instrument Length: 

25 items

Item Format: 
23 seven-point Likert-type items ranging from Never/rarely (1) to Consistently (7); 2 seven-point Likert-type items ranging from Poor (1) to Excellent (7).
Administration: 
Three observers (i.e., anaesthetists, intensive care specialists, or both) were trained using exemplar videos and consensus-building discussions. The observers then independently rated all 4 recorded simulations for each of the 40 teams. The team members were given the instrument without training or prior exposure.
Scoring: 
Team members: Mean scores are calculated across the four team members for each of the three factors and for overall performance. Observers: Mean scores are calculated across the three observers for each of the three factors and for overall performance.
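The scoring described above reduces to averaging each factor across raters (four team members, or three observers). A minimal sketch, assuming an illustrative data layout (one dict per rater mapping factor name to that rater's 7-point score; the factor keys here are paraphrases, not the authors' item groupings):

```python
# Hypothetical sketch of the mean-score calculation: average each factor
# across raters. Data layout and key names are illustrative assumptions.
from statistics import mean

def factor_scores(ratings: list[dict[str, float]]) -> dict[str, float]:
    """Return the mean score per factor across all raters."""
    factors = ratings[0].keys()
    return {f: mean(r[f] for r in ratings) for f in factors}

# Example: four team members' ratings on the three factors plus overall.
team = [
    {"leadership_coordination": 6, "mutual_monitoring": 5, "verbalising_info": 6, "overall": 6},
    {"leadership_coordination": 5, "mutual_monitoring": 5, "verbalising_info": 5, "overall": 5},
    {"leadership_coordination": 6, "mutual_monitoring": 4, "verbalising_info": 6, "overall": 6},
    {"leadership_coordination": 7, "mutual_monitoring": 6, "verbalising_info": 7, "overall": 7},
]
print(factor_scores(team))  # one mean per factor across the four raters
```

Observer scores would be computed the same way over the three observers' ratings.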
Language: 
English
Norms: 
None described.
Access: 
Open access (available on this website)
Notes on Access: 

Contact author to confirm access.

Psychometric Elements: Evidence of Validity
Content: 
Instrument content is based on a model of teamwork and the extant literature on teamwork. Six clinicians (i.e., critical care specialists and anaesthetists) and two psychologists reviewed the content for comprehensiveness, comprehensibility, and observability.
Response Process: 
Observers were interviewed to assess the feasibility of the observational assessment, and 8 items were deemed difficult to rate or infrequently rated (see table in Weller et al., 2011).
Internal Structure: 
High internal reliability was found for all three factors (alpha = 0.89, 0.91, and 0.92). However, three items did not load on any factor (loading below 0.30 on all factors), and only 17 items loaded on consistent factors across the team member and observer forms of the instrument; all further analysis was conducted on these 17 items (see table in Weller et al., 2013 for the list). Confirmatory factor analysis performed on both team and observer ratings supported a similar factor structure.
Relation to Other Variables: 
Team performance improved significantly over time (statistics not given). Teams led by specialists performed significantly better in simulations than teams led by trainees (p < .0001). Team member and observer scores were significantly correlated for overall performance, leadership and team coordination, and verbalising situational information (p < .001), but not for mutual performance monitoring (r = 0.08). However, team member scores were higher than observer ratings on all factors.
Consequential: 
None described.