Metric for the Observation of Decision-Making (MODe)

Instrument
Authors: 
Lamb, B.W.
Wong, H.W.L.
Vincent, C.
Green, J.S.A.
Sevdalis, N.
Overview: 

This tool was designed to assess collaborative processes within multidisciplinary team (MDT) case conferences, where members discuss the diagnosis and treatment of cancer patients. Team members include surgeons, oncologists, radiologists, pathologists, and clinical nurse specialists. Specifically, the MODe measures how thoroughly patient information is presented, how effectively the chair runs the meeting, and the extent to which the various specialists contribute productively to decision-making. In the validation study, clinical and non-clinical observers sat in on meetings and scored teamwork behaviors. The study reported good interobserver consistency (0.70 or above) for some items but moderate to low consistency for others. Results of the MODe are meant to be used by teams for self-reflection, monitoring, and improvement.

Descriptive Elements
Who is Being Assessed or Evaluated?: 
Teams
Instrument Type: 
Observer-based (e.g., rubric, rating tool, 360 degree feedback)
Notes for Type: 

This is an on-site (in situ) observational tool for care conferences.

Source of Data: 
Health care providers, staff
Instrument Content: 
Behaviors / skills
Notes for Content: 

Descriptive Information: The patient’s point in treatment is recorded as “pre-treatment,” “post-treatment,” or “recurrence/surveillance.” The outcome of the team meeting is recorded as “decision made,” “decision deferred,” or “no decision reached.”

Location and Attendance Information: hospital site and presence of particular team members in the MDT case conference, by specialty.

Quality of Information Presented and of Member Contributions:

  1. Case history information
  2. Radiological information
  3. Pathological information
  4. MDT Chair leadership
  5. Surgeon contributions
  6. Oncologist contributions
  7. Radiologist contributions
  8. Histopathologist contributions
  9. Clinical nurse specialist contributions

Other members are listed but not described.

Instrument Length: 

14 behavioral judgments per case; completion time varies with the length of the case

Item Format: 
Combination of checklist items and quality ratings for the nine key items. Ratings are made on a 5-point Likert-type scale with anchors at 1, 3, and 5 indicating poor, moderate, and high performance, respectively.
Administration: 
Observers sit in on multidisciplinary team care conferences and rate each case presented in the meeting on a score sheet. High-volume meetings (e.g., 25 cases per hour) make it difficult to rate behaviors accurately.
Scoring: 
Individuals do not receive a total or composite score; neither do teams.
Language: 
English
Norms: 
None described
Access: 
Open access (available on this website)
Notes on Access: 

Contact the author to confirm permission to use.

Psychometric Elements: Evidence of Validity
Content: 
The developers reviewed the literature on team performance in multidisciplinary cancer teams. The tool is based on an input-process-output model and builds on an existing, validated instrument.
Response Process: 
The tool was piloted with three teams in three hospitals in England, yielding data from a total of 112 cases. To ascertain agreement between two observers, the researchers compared the observers’ average scores on the nine quality rating scale items across all cases using the Mann-Whitney test, and calculated the intraclass correlation coefficient (ICC) for each item from the combined data. Average scores differed significantly on only one item, “case history information.” ICCs varied widely by item, from a low of 0.31 for “pathological information” to a high of 0.87 for “clinical nurse specialist contributions.” (Reliability coefficients of roughly 0.70 and above are acceptable for low-stakes assessment.) Analyzing the data as “learning curves” by cohort suggested that the raters became more consistent over time.
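
For readers who want to run this style of two-observer agreement analysis on their own data, the sketch below illustrates the two statistics named above. It is not the validation study's code: the column names (case, rater, score) and the example ratings are hypothetical, and it assumes the pandas, SciPy, and pingouin packages are installed.

    # Minimal sketch of a two-observer agreement analysis for one MODe item.
    # Hypothetical data and column names; not the validation study's code.
    import pandas as pd
    from scipy.stats import mannwhitneyu
    import pingouin as pg

    # Long-format ratings: two observers ("A", "B") scoring the same cases
    # on one 5-point quality item.
    df = pd.DataFrame({
        "case":  [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],
        "rater": ["A", "B"] * 5,
        "score": [3, 3, 4, 5, 2, 2, 5, 4, 3, 4],
    })

    # Mann-Whitney U test: do the two observers' score distributions
    # differ systematically across cases?
    a = df.loc[df["rater"] == "A", "score"]
    b = df.loc[df["rater"] == "B", "score"]
    stat, p = mannwhitneyu(a, b)
    print(f"Mann-Whitney U = {stat:.1f}, p = {p:.3f}")

    # Intraclass correlation: case-by-case consistency between observers.
    icc = pg.intraclass_corr(data=df, targets="case", raters="rater",
                             ratings="score")
    print(icc[["Type", "Description", "ICC"]])
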
Internal Structure: 
See ICC data above.
Relation to Other Variables: 
None described.
Consequential: 
None described.