Team Observed Structured Clinical Encounter (TOSCE)

Submitted by the National Center for Interprofessional Practice and Education on Sep 6, 2016 - 11:12am CDT

Instrument
Authors: 
Lie, D.
May, W.
Richter-Lagha, R.
Forest, C.
Banzali, Y.
Lohenry, K.
Overview: 

This tool was designed to assess the teamwork behaviors of interprofessional student teams as demonstrated in a clinical setting. Specifically, the tool measures communication, collaboration, roles and responsibilities, collaborative patient-family-centered approach, conflict management/resolution, and team functioning. Two versions of the 7-item tool are provided so that individual performance can be assessed separately from team performance, and the two versions may be used concurrently. In the validation study, the teams consisted of medicine, occupational therapy, pharmacy, and physician assistant students. The faculty raters represented a range of professions. The purpose of the study reported here was to determine whether faculty, after one hour of video-based rater training, could accurately score student teams that had been trained to perform at different skill levels ("above expectations," "at expectations," and "below expectations"). Ratings were based on student performance in a single simulation station involving a stroke patient. The findings indicated that faculty rater accuracy varied and that raters tended to be lenient toward individual team members. When applied to future teams, results from the TOSCE can be provided to the individual or team as formative feedback.

Link to Resources
Descriptive Elements
Who is Being Assessed or Evaluated?: 
Individuals
Teams
Instrument Type: 
Observer-based (e.g., rubric, rating tool, 360 degree feedback)
Notes for Type: 

The simulation involved an inpatient visit with a person who had suffered a stroke.  In the validation study, the standardized patient encounters were viewed from different rooms via remote cameras. The authors suggest that this tool can also be used as an onsite (in situ) observational tool. 

Source of Data: 
Health care providers, staff
Notes for Data Sources: 

16 volunteer faculty members representing the dentistry, medicine, occupational therapy, pharmacy, and physician assistant professions, all with experience teaching and assessing students but no prior experience with IPE assessment.

Instrument Content: 
Behaviors / skills
Notes for Content: 

6 dimensions are rated:
1. Communication (with other team members/ team with patient)
2. Collaboration
3. Roles and responsibilities
4. Collaborative patient-family-centered approach
5. Conflict management/resolution
6. Team functioning
An overall global rating of the individual's or the team's performance is also made.

Instrument Length: 

7 ratings (6 dimensions plus a global rating) for each of the 4 team members and for the team as a whole, for a total of 35 ratings; each rater observed a 35-minute teamwork encounter, although the time may vary with the scenario.
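The per-rater rating load follows directly from the instrument's structure; a minimal sketch of the arithmetic, using only figures stated above:

```python
# Each rater completes 7 ratings (6 dimensions + 1 global rating)
# for each of the 4 individual students and for the team as a whole.
ratings_per_target = 6 + 1   # six dimensions plus the global rating
targets = 4 + 1              # four team members plus the team itself
total_ratings = ratings_per_target * targets
print(total_ratings)         # 35 ratings per observed encounter
```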

Item Format: 
Ratings were made on a 3-point scale (below expected, at expected, and above expected). Descriptive behavioral anchors developed by the authors are provided.
Administration: 
Raters received 60 minutes of training immediately prior to TOSCE administration. Training included viewing video demonstrations representing three different levels of performance and was deemed complete when all 16 raters agreed on the performance level of the students and teams shown in the videos. During the actual study, raters watched a team encounter of 4 students (trained actors who performed at varying levels of teamwork competency) and made individual and team ratings.
Scoring: 
Scoring procedures were not described. The research study was designed to evaluate the accuracy with which raters used the TOSCE. Students portrayed a target performance level. Researchers assessed the accuracy of faculty judgments by comparing TOSCE ratings to the target performance level.
Language: 
English
Norms: 
None described.
Access: 
Open access (available on this website)
Notes on Access: 

Contact the author to confirm permission to use.

Psychometric Elements: Evidence of Validity
Content: 
The authors refer to two previous studies that investigated the face/content validity of the tool. The tool was based on a previously validated instrument, the McMaster-Ottawa scale.
Response Process: 
No faculty rater was consistently more lenient or strict than the others. Specifically, for a one-station TOSCE involving four faculty rating students on six competencies, only a small percentage (nearly 4%) of the variation in student scores was attributable to the faculty rater (variance component 0.01). Raters were able to distinguish the lowest and highest levels of performance for both individuals and teams, although they tended toward leniency relative to the target performance level. In a post-TOSCE survey, 94% of the 16 raters felt they had sufficient time to score the TOSCE, 81% felt the tool was useful for assessing individuals, and 81% felt it was useful for assessing teams. Raters professed greater confidence in assessing teams than individuals, however.
Internal Structure: 
Student teams were trained to perform at three skill levels (below expectations, at expectations, and above expectations). Researchers assessed the accuracy of faculty judgments by comparing TOSCE ratings to the target performance level; faculty ability to accurately score target performance varied. Calculations for a four-team TOSCE, in which students are 'nested' within teams, showed substantial variability: nearly 25% of the total variance in faculty accuracy scores was attributable to systematic differences between faculty raters, so the accuracy of faculty raters may vary from student team to student team. A moderate percentage of the variation in faculty accuracy was attributable to the faculty-by-team interaction (variance component 0.005, or about 19%).
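The percentages reported above come from a variance decomposition: each source's share is its variance component divided by the total variance. A minimal sketch of that calculation; only the faculty-by-team component (0.005, about 19% of the total) is reported in the text, so the other component values below are hypothetical placeholders chosen to be consistent with the stated percentages:

```python
# Variance components for faculty accuracy scores.
# "faculty_x_team" (0.005) is the value reported in the study;
# the other components are hypothetical illustrations.
components = {
    "faculty_rater": 0.0066,   # hypothetical: ~25% of total
    "faculty_x_team": 0.005,   # reported: ~19% of total
    "residual": 0.0147,        # hypothetical remainder
}
total = sum(components.values())
# Share of total variance attributable to each source, in percent.
percents = {source: 100 * value / total for source, value in components.items()}
```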
Relation to Other Variables: 
None described.
Consequential: 
None described.