National Center Webinar Fundamentals of IPECP Measurement - Webinar 2

Submitted by National Center... on Aug 4, 2017 - 12:52pm CDT

This webinar is designed for practitioners (educators, clinicians, administrators) who are responsible for planning assessment and evaluation studies in IPECP, and who wish to become better consumers of existing measurement tools and more discerning readers of the research literature.  Our main goals are to help you become (1) more skilled and (2) more confident in appraising the validity evidence for particular measurement tools.  The webinar serves as a companion to the NC’s primer, “Assessment and Evaluation in Interprofessional Practice and Education: What Should I Consider When Selecting a Measurement Tool?” by Connie C. Schmitz and Michael Cullen.  The primer lays out three criteria involved in tool selection: relevance, validity, and feasibility.  In this interactive webinar, we focus on validity.

First, we review important foundational concepts in the measurement field, such as variance, measurement error, reliability, and related statistics.  We discuss the importance of framing the “validity argument,” and illustrate through examples the types of validity evidence that may be gathered to test those arguments.  We explain some of the more common validity statistics (such as correlation coefficients, factor loadings, and standardized effect sizes) reported in study findings, and give you rules of thumb for interpreting these statistics.  Having a grasp of these statistics—even at the 10,000-foot level—is helpful not only in selecting quality instruments, but also in understanding what types of conclusions you can draw about your own assessment or evaluation data.
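To make two of these statistics concrete, here is an illustrative sketch (not part of the webinar materials) that computes a Pearson correlation coefficient and Cronbach’s alpha on small, hypothetical data sets, then applies commonly cited rule-of-thumb bands.  The data and thresholds below are conventional examples chosen for illustration, not the webinar’s own cutoffs.

```python
# Illustrative sketch (not from the webinar): two common measurement
# statistics with conventional rule-of-thumb interpretations.
from statistics import mean
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def cronbach_alpha(items):
    """Cronbach's alpha for a scale; `items` is one list of scores per
    item, with respondents in the same order in every list."""
    k = len(items)
    def var(v):  # population variance
        m = mean(v)
        return sum((a - m) ** 2 for a in v) / len(v)
    sum_item_vars = sum(var(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # total score per respondent
    return (k / (k - 1)) * (1 - sum_item_vars / var(totals))

# Hypothetical ratings from two raters on the same six learners
rater1 = [3, 4, 2, 5, 4, 3]
rater2 = [3, 5, 2, 4, 4, 2]
r = pearson_r(rater1, rater2)

# Hypothetical 3-item scale, six respondents (one inner list per item)
scale_items = [[4, 5, 3, 4, 2, 5],
               [4, 4, 3, 5, 2, 4],
               [5, 5, 2, 4, 3, 5]]
alpha = cronbach_alpha(scale_items)

# Common rule-of-thumb bands (conventions, not hard rules)
r_label = "strong" if abs(r) >= 0.5 else "moderate" if abs(r) >= 0.3 else "weak"
a_label = "acceptable" if alpha >= 0.7 else "questionable"
print(f"r = {r:.2f} ({r_label}); alpha = {alpha:.2f} ({a_label})")
```

On these made-up numbers the two raters correlate strongly and the three-item scale shows acceptable internal consistency; with real instruments, the same statistics would be weighed alongside the rest of the validity argument rather than read in isolation.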


Learning Outcomes

By the end of this 90-minute webinar (didactic presentation interspersed with Q+A), you will be able to:

  1. Define reliability and validity
  2. Explain why validity is not inherent to an instrument
  3. Differentiate between characteristics of a “good” tool and validity evidence
  4. Generate 2-3 validity claims for an instrument you are currently using or considering
  5. Recognize how you as a user can influence the validity of data collected by your instrument
  6. Apply rules of thumb when interpreting some common reliability and validity statistics
  7. Decipher validity results in a paper testing the validity of a well-known instrument


Please download these handouts for your reference:

Article: Schmitz et al., The Interprofessional Collaborative Competency Attainment Survey (ICCAS): A replication validation study
Handout: Measurement Tools: What Should I Look For
Handout: Examples of Validity Evidence
Handout: Rules of Thumb for Interpreting Validity Data
Handout: Tips for Finding Validity Evidence
Handout: How Low Reliability Affects Scoring

Connie C. Schmitz, PhD
Barbara F. Brandt
National Center Publications