When Less is More: Validating a Brief Scale to Rate Interprofessional Team Competencies

Submitted by Desiree Lie on May 8, 2017 - 3:19pm CDT

Resource Type: 
Journal Article

There is a need for validated and easy-to-apply behavior-based tools for assessing interprofessional team competencies in clinical settings. The 7-item observer-based Modified McMaster-Ottawa scale was developed for the Team Objective Structured Clinical Encounter (TOSCE) to assess individual and team performance in interprofessional patient encounters. We aimed to improve scale usability for clinical settings by reducing the number of items while maintaining generalizability, and to explore the minimum number of observed cases required to achieve modest generalizability for giving feedback.

We administered a two-station TOSCE in April 2016 to 63 students split into 16 newly-formed teams, each consisting of four professions. The two stations were of similar difficulty. We trained 16 faculty raters to rate two teams each. We examined individual and team performance scores using generalizability (G) theory and principal component analysis (PCA).
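The abstract does not include the analysis code, but a minimal, hypothetical sketch of how PCA can flag redundant (multicollinear) scale items is shown below. The data layout (one row per student, one column per scale item) and the rating values are assumptions for illustration only.

```python
# Illustrative sketch only; not the authors' actual analysis.
# Hypothetical check for multicollinearity among the 7 scale items,
# assuming ratings are stored as one row per student, one column per item.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Placeholder data: 63 students x 7 items on an arbitrary rating scale.
ratings = rng.integers(0, 9, size=(63, 7)).astype(float)

# Strong pairwise correlations between items suggest redundancy.
print(np.round(np.corrcoef(ratings, rowvar=False), 2))

# If a few components explain most of the variance, several items carry
# largely overlapping information and are candidates for removal.
pca = PCA().fit(ratings)
print(np.round(pca.explained_variance_ratio_, 2))
```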

The 7-item scale shows modest generalizability (.75) for individual scores. PCA revealed multicollinearity and singularity among scale items, and we identified three potential items for removal. Reducing the items for individual scores from 7 to 4 (measuring Collaboration, Roles, Patient/Family-centeredness, and Conflict Management) changed scale generalizability only slightly, from .75 to .73. Performance assessment with two cases is associated with reasonable generalizability (.73). Students in newly-formed interprofessional teams show a learning curve after one patient encounter. Team scores from a two-station TOSCE demonstrate low generalizability whether the scale consisted of 4 items (.53) or 7 items (.50).
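For readers unfamiliar with G theory, the sketch below shows one common way a relative G coefficient can be estimated for a persons-by-cases design from variance components, using the formula var_person / (var_person + var_residual / n_cases). This is a generic illustration under assumed data, not the authors' computation.

```python
# Illustrative sketch only; not the authors' computation.
import numpy as np

def g_coefficient(scores: np.ndarray) -> float:
    """scores: persons x cases matrix of ratings (hypothetical layout)."""
    n_p, n_c = scores.shape
    grand = scores.mean()
    person_means = scores.mean(axis=1)
    case_means = scores.mean(axis=0)

    # Mean squares from a two-way ANOVA without replication.
    ms_person = n_c * np.sum((person_means - grand) ** 2) / (n_p - 1)
    resid = scores - person_means[:, None] - case_means[None, :] + grand
    ms_resid = np.sum(resid ** 2) / ((n_p - 1) * (n_c - 1))

    # Expected mean squares give the variance component estimates.
    var_person = max((ms_person - ms_resid) / n_c, 0.0)
    var_resid = ms_resid

    # Relative G coefficient for a design averaging over n_c cases.
    return var_person / (var_person + var_resid / n_c)
```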

The 4-item Modified McMaster-Ottawa scale for assessing individual performance in interprofessional teams retains the generalizability and validity of the 7-item scale. Observation of students in teams interacting with two different patients provides reasonably reliable ratings for giving feedback. The 4-item scale has potential for assessing individual student skills and the impact of interprofessional education (IPE) curricula in clinical practice settings.

Author(s): 
Lie, Desiree A
Richter-Lagha, Regina
Forest, Christopher P
Walsh, Anne
Lohenry, Kevin