Performance Assessment Communication and Teamwork Tools Set (PACT)

Submitted by National Center... on Oct 24, 2016 - 11:35am CDT

Chiu, C.
Brock, D.
Abu-Rish, E.
Vorvick, L.
Wilson, S.
Hammer, D.
Schaad, D.
Blondon, K.
Zierler, B.

The PACT Tool Set was designed by faculty and staff at the University of Washington (Center for Health Sciences, Interprofessional Education, Research, and Practice) as part of a Macy and Hearst Foundations grant to develop a simulation-based team training program for pre-licensure health professions students.  To develop the program, the authors chose the "Team Strategies and Tools to Enhance Performance and Patient Safety" (TeamSTEPPS®) model as a guiding framework.  The PACT contains five instruments: two are self-report, pre-post assessments, and three are observational rating tools developed for raters with different levels of experience.  All five tools contain items that reflect the five domains of TeamSTEPPS: Team structure, Leadership, Situation monitoring, Mutual support, and Communication.  The PACT tools are designed to provide assessment feedback for learners and evaluation information for program faculty.  The UW website also contains a library of simulation scenarios, a debriefing guide, and many other useful materials to support implementation of the training program. A validity study of 306 medicine, nursing, pharmacy, and physician assistant students demonstrated acceptable to very good overall inter-rater reliability for the observational tools.

Link to Resources
Descriptive Elements
Who is Being Assessed or Evaluated?: 
Instrument Type: 
Self-report (e.g., survey, questionnaire, self-rating)
Observer-based (e.g., rubric, rating tool, 360 degree feedback)
Notes for Type: 

The PACT Tool Set contains pre- and post-training assessment tools that are similar in structure and content.  These are self-report tools, providing information on individual students.  The set also contains three levels of observational rating tools that provide escalating levels of complexity and training support for the raters: Novice, Expert, and Video.  The assessment focus for the observational tools is the team, NOT the individual.

Source of Data: 
Health care trainees
Health care providers, staff
Notes for Data Sources: 

Healthcare students provide pre- and post-training assessment data and healthcare professionals provide observational ratings. In the validation study, team members consisted mainly of medicine, nursing, pharmacy, and physician assistant (PA) students.

Instrument Content: 
Satisfaction with IP training
Attitudes, values, beliefs regarding IPE, IPCP, professions
Reported perceptions, experiences of working relationships, teamwork
Behaviors / skills
Notes for Content: 

The three types of observational tools (i.e., novice, expert, and video) all rate teams on five domains:

  1. Team structure
  2. Leadership
  3. Situation monitoring
  4. Mutual support
  5. Communication

The expert and video observational tools also include questions on the date, time, and type of simulation observed. The novice observational tool includes questions on rater profession as well as the date, time, and type of simulation participated in.

The self-report tools (i.e., pre- and post-simulation training) include ratings in 15 areas:

  1. Familiarity working and training with teams
  2. Interprofessional training satisfaction
  3. Benefits of training
  4. Learning and performance
  5. Learning environments 
  6. Skills
  7. Team structure
  8. Leadership
  9. Situation monitoring
  10. Mutual support
  11. Communication
  12. Interprofessional training experience
  13. Essential practice characteristics
  14. Understanding before and after training
  15. Expectations

These tools also contain questions on the sex, age, and prior healthcare experience of the student team member.

Instrument Length: 

Self-report pre-training: 68 items; no time length specified

Self-report post-training: 103 items; no time length specified

Novice observation: 5 ratings; no time length specified

Expert observation: 18 ratings; simulations were approximately 15 minutes long

Video observation: 36 ratings; each simulation is viewed three times, requiring a minimum of approximately 45 minutes


Item Format: 
Self-report: Most items have a 5-point likert-type scales ranging from very unfamiliar to very familiar or strongly disagree to strongly agree or never to frequently (some scale include a not applicable (NA) option). One set of items has three options: essential, not essential, don’t know. Finally, one item is free response. Novice observation: 5-point likert-type scale ranging from poor (0) to excellent (3) Expert and video observation: 4-point rating scale with options need improvement in most areas (0), need improvement in some areas (1), satisfactory (2), excellent (3), and not enough information to answer; 2-point rating scale with options present, absent, and not applicable; 3-point rating scale with options absent (0), isolated (1), consistent (2), and not applicable; 4 point likert-type scale ranging from poor (0) to excellent (3) with a not applicable option
The PACT was developed for students participating in team-based interprofessional training simulations. As described in the validation study, the simulations consisted of asthma, congestive heart failure (CHF), and supraventricular tachycardia (SVT) scenarios; three pediatric scenarios; or three obstetric scenarios. A training-day agenda is provided in the referenced article, consisting of introductions, a brief lecture on TeamSTEPPS, an introduction to the novice observation tool, team-building exercises, three simulations, and a wrap-up. The video observational tool provides special instructions for use, including watching each team scenario three times with different intents.
For the observational tools, the quality (i.e., poor to excellent) and frequency (i.e., absent/present or absent/isolated/consistent) scores are multiplied for each behavior, except when not applicable. The average of the non-missing scores is then added to the average of the non-missing domain quality (i.e., need improvement in most areas to excellent) scores to produce a total composite score. No specific scoring instructions are provided for the self-report measures.
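The composite-score arithmetic described above can be sketched in a few lines of Python. This is an illustrative reading of the scoring rule, not an official scoring script; the function name and input layout are assumptions.

```python
# Sketch of the PACT observational composite score as described above:
# quality x frequency per behavior (not-applicable items excluded),
# averaged, then added to the average of the domain quality ratings.

def pact_composite(behaviors, domain_quality):
    """behaviors: list of (quality, frequency) tuples; either value is
    None when the behavior was rated not applicable.
    domain_quality: list of 0-3 domain ratings, None when missing."""
    products = [q * f for q, f in behaviors
                if q is not None and f is not None]
    domains = [d for d in domain_quality if d is not None]
    behavior_avg = sum(products) / len(products) if products else 0.0
    domain_avg = sum(domains) / len(domains) if domains else 0.0
    return behavior_avg + domain_avg
```

For example, two scored behaviors (3×2 and 2×1, one item not applicable) average to 4.0, and domain ratings of 3 and 2 average to 2.5, giving a composite of 6.5.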
None described.
Open access (available on this website)
Notes on Access: 

Materials for faculty and curriculum development can be found on the UW website.  Validity data were reported in an unpublished doctoral dissertation.  Materials are copyrighted: contact UW to confirm permission to use.

Psychometric Elements: Evidence of Validity
Content: 
The content was based on the TeamSTEPPS framework. Item content for the observational tools was drawn from previously validated instruments. Experts in teamwork and assessment reviewed the tools, checking each item's match to a TeamSTEPPS construct and the overall coverage of the constructs. The expert panel identified communication as an underrepresented construct, and items were developed to assess it.
Response Process: 
Raters judged the expert observational tool too long, and it was reduced to the final 18-rating form. The video observational tool was too time consuming when timestamps were required for individual behaviors, so this feature was dropped. Scale mid-points (i.e., option 3 on a 5-option scale) were eliminated and "not enough information" was added to better capture rater judgments.
Internal Structure: 
The overall inter-rater reliability of the novice, expert, and video observational tools was good (Intraclass Correlation Coefficient (ICC) = 0.85, 0.76, and 0.90, respectively). The inter-rater reliability for four of the five domains in the expert observational tool was acceptable (ICC = 0.44-0.66); the exception was the team structure domain (ICC = 0.21). The inter-rater reliability for all five domains in the video observational tool was acceptable (ICC = 0.54-0.84).
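For readers who want to compute comparable reliability estimates from their own rating data, a two-way random-effects, single-rater ICC can be sketched as below. The specific ICC model used in the validation study is not stated here, so treating it as ICC(2,1) is an assumption.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random-effects, single-rater agreement.
    ratings: 2-D array-like, rows = subjects (teams), columns = raters."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per-subject means
    col_means = x.mean(axis=0)   # per-rater means
    # Mean squares from the two-way ANOVA decomposition
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # raters
    resid = x - row_means[:, None] - col_means[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))         # error
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

With perfect rater agreement the function returns 1.0; systematic or random disagreement between raters pulls the estimate down toward 0.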
Relation to Other Variables: 
None described.
Consequences: 
None described.