Fundamentals of IPECP Measurement

Webinar 1: Fundamentals of IPECP Measurement I
Knowing What You Really Want to Measure and How to Select the Right Tool
July 27, 2017 1PM CT

This webinar is designed for practitioners (educators, clinicians, administrators) who are relatively new to the tasks of assessing individual trainees in IPECP competencies, assessing IPCP team performance, or evaluating educational programs designed to improve IPECP. It serves as a companion to the NC’s primer, “Assessment and Evaluation in Interprofessional Practice and Education: What Should I Consider When Selecting a Measurement Tool?” by Connie C. Schmitz and Michael Cullen. The primer lays out three criteria involved in tool selection: relevance, validity, and feasibility. This webinar focuses on the first criterion, relevance, in the context of assessment planning. The goals of this webinar are to help you: (1) recognize the importance of assessment planning as a precursor to instrument selection, and (2) know specifically what to look for when determining an instrument’s relevance for your needs.


Learning Outcomes:

By the end of this 1-hour webinar (35-minute presentation, followed by 25 minutes of Q+A), you will be able to:

  1. Access and use the resources of the NC Assessment and Evaluation website

  2. Differentiate between “measurement,” “assessment,” “evaluation,” and “research”

  3. Identify assessment planning decisions that need to be made prior to selecting a tool

  4. Explain how to assess the relevance of a given tool for your needs

Pre-reading of the following material is encouraged:

  • Schmitz, C.C. & Cullen, M.J. (2015). Evaluating Interprofessional Education and Collaborative Practice: What Should I Consider When Selecting a Measurement Tool? (especially pages 1-3 and 13-14). Minneapolis, MN: University of Minnesota, National Center for Interprofessional Practice and Education. Online access.

  • Archibald, D., Trumpower, D., & MacDonald, C.J. (2014). Validation of the interprofessional collaborative competency attainment survey (ICCAS). Journal of Interprofessional Care, 28(6), 553-558. Online access.

  • Russell, T., et al. (2016). Assessment Planning Worksheet from Practical Guide 3: Steps for Developing an Assessment Plan. Minneapolis, MN: University of Minnesota, National Center for Interprofessional Practice and Education. Online access.


Webinar 2: Fundamentals of IPECP Measurement II
Sources of Validity, Measurement Error, and Interpreting Your Results
August 3, 2017 1PM CT

This webinar is designed for practitioners (educators, clinicians, administrators) who are responsible for planning assessment and evaluation studies in IPECP, and who wish to become better consumers of existing measurement tools and more discerning readers of the research literature. Our main goals are to help you become (1) more skilled in, and (2) more confident when, appraising the validity evidence for particular measurement tools. The webinar serves as a companion to the NC’s primer, “Assessment and Evaluation in Interprofessional Practice and Education: What Should I Consider When Selecting a Measurement Tool?” by Connie C. Schmitz and Michael Cullen. The primer lays out three criteria involved in tool selection: relevance, validity, and feasibility. In this interactive webinar, we focus on validity.

First, we review foundational concepts in the measurement field, such as variance, measurement error, reliability, and related statistics. We discuss the importance of framing the “validity argument,” and illustrate through examples the types of validity evidence that may be gathered to test those arguments. We explain some of the more common validity statistics (such as correlation coefficients, factor loadings, and standardized effect sizes) reported in study findings, and give you rules of thumb for interpreting these statistics. Having a grasp of these statistics—even at the 10,000-foot level—is helpful not only in selecting quality instruments, but in understanding what types of conclusions you can draw about your own assessment or evaluation data.


Learning Outcomes:

By the end of this 90-minute webinar (didactic presentation interspersed with Q+A), you will be able to:

  1. Define reliability and validity

  2. Explain why validity is not inherent to an instrument

  3. Differentiate between characteristics of a “good” tool and validity evidence

  4. Generate 2-3 validity claims for an instrument you are currently using or considering

  5. Recognize how you as a user can influence the validity of data collected by your instrument

  6. Apply rules of thumb when interpreting some common reliability and validity statistics

  7. Interpret the validity results reported in a paper testing a well-known instrument

Pre-reading of the following material is encouraged:

  • Schmitz, C.C. & Cullen, M.J. (2015). Evaluating Interprofessional Education and Collaborative Practice: What Should I Consider When Selecting a Measurement Tool? (especially pages 5-12, 15-18, and 25-31). Minneapolis, MN: University of Minnesota, National Center for Interprofessional Practice and Education. Online access.

  • Archibald, D., Trumpower, D., & MacDonald, C.J. (2014). Validation of the interprofessional collaborative competency attainment survey (ICCAS). Journal of Interprofessional Care, 28(6), 553-558. Online access.

  • Handout: Measurement Tools: What Should I Look For?