Interprofessional Professionalism Assessment (IPA)

Submitted by Connie C Schmitz on Dec 31, 2018 - 4:23pm CST

Instrument
Authors: 
Frost J.S., Hammer D.P., Nunez L.M., Adams J.L., Chesluk B., Grus C., Harrison N., McGuinn K., Mortensen L., Nishimoto J.H., Palatta A., Richmond M., Ross E.J., Tegzes J., Ruffin A.L., & Bentley J.P.
Overview: 

The Interprofessional Professionalism Assessment (IPA) instrument was designed to measure interprofessional professionalism (IPP), defined as the “Consistent demonstration of core values evidenced by professionals working together, aspiring to, and wisely applying principles of altruism and caring, excellence, ethics, respect, communication, and accountability to achieve optimal health and wellness in individuals and communities.” The IPA was created over a 9-year period through extensive development and pilot testing by the Interprofessional Professionalism Collaborative (IPC), a national organization with representatives from 12 entry-level health professions and the National Board of Medical Examiners. The IPA is a 26-item observational rating tool used by faculty/preceptors to assess learners’ professionalism when working with members of other health professions. It can be completed at the end of a practice experience (e.g., a rotation) in environments where interprofessional, collaborative care of patients is required. The psychometric properties of the IPA were tested with preceptors from 10 different health professions to support its generalizability. The psychometric results provide evidence of the IPA’s reliability and validity and support its use across multiple health professions and in various practice sites.

Descriptive Elements
Who is Being Assessed or Evaluated?: 
Individuals
Instrument Type: 
Observer-based (e.g., rubric, rating tool, 360 degree feedback)
Source of Data: 
Health care providers, staff
Notes for Data Sources: 

The IPA can be completed by faculty/preceptors from the same profession as the learner or from a different one. Additionally, the instrument could be completed by learners as a form of self-assessment. In the validation study, the tool was administered to preceptor/learner pairs, but only preceptor data were used for the psychometric analysis.

Instrument Content: 
Behaviors / skills
Notes for Content: 

The instrument contains 26 behavioral items, which are organized into six domains:

  1. Communication
  2. Respect
  3. Altruism and Caring
  4. Excellence
  5. Ethics
  6. Accountability
Instrument Length: 

The IPA is available as a 5-page, paper-and-pencil tool. It contains a brief introduction with definitions of the overall construct being measured, as well as definitions of the six domains. There are 28 items in total: 26 measure a specific behavior, and two are qualitative (open-ended).

Item Format: 
The 26 behavioral items are constructed on a 5-point, Likert-type scale with the following anchors: 1 = "strongly disagree," 2 = "disagree," 3 = "neutral," 4 = "agree," and 5 = "strongly agree." For each behavior there is also the response option, "no opportunity to observe." The two qualitative items provide space for raters to comment on the "overall strengths related to interprofessional professionalism" and "areas for improvement related to interprofessional professionalism."
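To make the response format concrete, a minimal coding sketch in Python follows. The anchor labels come from the instrument itself; the column names, the example data, and the choice to treat "no opportunity to observe" as missing for numeric scoring are assumptions for illustration, not part of the published tool.

    import numpy as np
    import pandas as pd

    # Map the IPA's 5-point anchors to numeric scores; "no opportunity to observe"
    # is coded as missing here (an assumption about how analysts might handle it).
    ANCHORS = {
        "strongly disagree": 1,
        "disagree": 2,
        "neutral": 3,
        "agree": 4,
        "strongly agree": 5,
        "no opportunity to observe": np.nan,
    }

    def code_responses(raw: pd.DataFrame) -> pd.DataFrame:
        """Convert text responses for the behavioral items to numeric scores."""
        return raw.apply(lambda col: col.str.lower().map(ANCHORS))

    # Hypothetical responses for two illustrative items
    raw = pd.DataFrame({
        "communication_item_1": ["Agree", "Strongly agree", "No opportunity to observe"],
        "respect_item_1": ["Neutral", "Agree", "Disagree"],
    })
    print(code_responses(raw))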
Administration: 
The IPA is a 5-page, paper-and-pencil tool that could be programmed into an electronic platform that administers, collects, and summarizes the data as an online survey (e.g., Qualtrics). The instrument is completed by observers at the end of a practice experience. No other specific administration instructions are given.
Scoring: 
No specific scoring instructions are given.
Language: 
English
Norms: 
None
Access: 
Open access (available on this website)
Notes on Access: 

The Interprofessional Professionalism Collaborative (IPC) also maintains a website from which a PDF of the IPA instrument can be downloaded. The authors encourage users who access the IPA to voluntarily complete a survey for future follow-up and research. The IPC website also offers a toolkit with training videos for users (e.g., students, faculty, preceptors) who are seeking to use and apply the IPA in their practices and programs. The IPC website can be accessed here: http://www.interprofessionalprofessionalism.org/assessment.html

Psychometric Elements: Evidence of Validity
Content: 
The development of the IPA was extensive and well executed from 2006 to 2015 over three phases: (1) construct development and generation of observable behaviors and response scales; (2) content expert review and cognitive interviews with typical raters; and (3) a two-year pilot study. The process began with a literature review, construct definition, and the organization of 200 potential behaviors into categories by the Interprofessional Professionalism Collaborative (IPC). The number of behaviors was reduced to 43 after the IPC applied explicit inclusion criteria (e.g., observable in practice, applicable across the professions, not redundant). Members of the IPC then made national and international presentations about the tool, documented oral feedback from audience members, and collected follow-up online survey feedback from 205 individuals representing 11 professions. This feedback shaped a 39-item instrument, which was then reviewed by a panel of 23 expert reviewers from the U.S. and Canada. The panel responded to structured survey questions about the tool’s content, the fit of the 39 behavioral items within and across the six domains, and its overall organization, format, and length.
Response Process: 
Twenty-four preceptors (two from each of the 12 IPC member health professions), representing “typical” preceptors who would use the tool, took part in the cognitive interviews. Based on their feedback, the IPA was reduced to 26 items. This version of the IPA was used in a large, multi-institution, multi-profession pilot study. A total of 67 academic institutions were invited to participate in the pilot; 30 agreed to do so (44.8%). Using a key contact method, nearly 3,000 preceptors (estimated) were invited into the study; 376 agreed and 233 provided data (62% of those enrolled, 7.9% of the potential population).
Internal Structure: 
Exploratory factor analysis (EFA) was conducted on preceptors’ ratings of their learners, assuming ordered categorical factor indicators. To determine the number of factors to retain, eigenvalues and measures of fit were examined [i.e., the root mean square error of approximation (RMSEA), the comparative fit index (CFI), and the standardized root mean square residual (SRMR)]. Prior to factor analysis, the extent of missing data for each of the IPA items was examined (i.e., an item either left blank or marked N/O, “no opportunity to observe in this environment”). Internal consistency reliability of the factors suggested by the EFA was calculated using coefficient alpha. The initial EFA using 21 items (excluding five items with extensive missing data) suggested retaining four factors. With eigenvalues of 12.670, 1.229, 0.888, and 0.787, the four factors together accounted for 86.5% of the variance in the set of variables, and the fit indices indicated good model fit (RMSEA = 0.064, 90% CI: 0.055–0.078; CFI = 0.991; SRMR = 0.027). The four factors corresponded well to the following domains: Communication, Respect, Excellence, and Altruism and Caring. Internal consistency reliability coefficients were high (alpha > 0.94) for each of the factors. Despite these psychometric results, and based on other considerations, the study authors decided to keep the five excluded items and the two other domains (Ethics and Accountability) in the final instrument.
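As a rough illustration of the analyses described above, the sketch below shows how an exploratory factor analysis and coefficient alpha might be computed in Python. It is an approximation under stated assumptions: the pilot analysis treated items as ordered categorical indicators, whereas this sketch runs an ordinary EFA (via the factor_analyzer package) on numeric item scores, and it uses listwise deletion for blank or "no opportunity to observe" responses.

    import pandas as pd
    from factor_analyzer import FactorAnalyzer

    def explore_factors(items: pd.DataFrame, n_factors: int = 4):
        """Fit an EFA with an oblique rotation; return eigenvalues and loadings."""
        complete = items.dropna()                 # listwise deletion (an assumption)
        fa = FactorAnalyzer(n_factors=n_factors, rotation="oblimin")
        fa.fit(complete)
        eigenvalues, _ = fa.get_eigenvalues()     # eigenvalues of the correlation matrix
        loadings = pd.DataFrame(fa.loadings_, index=complete.columns)
        return eigenvalues, loadings

    def cronbach_alpha(factor_items: pd.DataFrame) -> float:
        """Coefficient alpha for the items loading on one factor
        (rows = rated learners, columns = items)."""
        complete = factor_items.dropna()
        k = complete.shape[1]
        item_variance_sum = complete.var(axis=0, ddof=1).sum()
        total_score_variance = complete.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variance_sum / total_score_variance)

A categorical-indicator EFA with the fit indices reported in the study (RMSEA, CFI, SRMR) would typically be estimated in structural equation modeling software rather than with this simplified approach.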
Relation to Other Variables: 
During the pilot study, responding preceptors were also asked to complete two global items for each learner they evaluated: one was a global rating of the learner's interprofessional professionalism, and the other was a global rating of the learner’s overall performance on the practice experience. These ratings were made using a 5-point Likert-type scale (1 = "poor," 5 = "excellent"). Given the results of the factor analysis, items within each domain were averaged to create subscale scores, and factor scores were also estimated from the final factor model. These scores were all positively and significantly correlated with the two global performance items described above.
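A brief sketch of this scoring-and-correlation step is given below. The domain-to-item mapping shown is a hypothetical placeholder (the real assignments follow the published instrument), and the use of Spearman correlations is an assumption; the study reports only that the correlations were positive and significant.

    import pandas as pd

    # Hypothetical grouping of item columns by domain.
    DOMAINS = {
        "communication": ["communication_item_1", "communication_item_2"],
        "respect": ["respect_item_1", "respect_item_2"],
    }

    def subscale_scores(items: pd.DataFrame) -> pd.DataFrame:
        """Average the items within each domain, ignoring missing responses."""
        return pd.DataFrame(
            {domain: items[cols].mean(axis=1) for domain, cols in DOMAINS.items()}
        )

    def correlate_with_globals(subscales: pd.DataFrame,
                               global_ratings: pd.DataFrame) -> pd.DataFrame:
        """Correlate each subscale score with the two 5-point global ratings
        (correlation method is an assumption for illustration)."""
        combined = pd.concat([subscales, global_ratings], axis=1)
        return combined.corr(method="spearman").loc[
            subscales.columns, global_ratings.columns
        ]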
Consequential: 
Not discussed.