A new Mentor Evaluation Tool: Evidence of validity

Michi Yukawa 1,2,¤a,* (Conceptualization, Data curation, Formal analysis, Methodology, Project administration, Writing – original draft, Writing – review & editing), Stuart A. Gansky 3,¤b (Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Writing – review & editing), Patricia O’Sullivan 1,¤b (Conceptualization, Formal analysis, Investigation, Methodology, Supervision, Writing – review & editing), Arianne Teherani 1,¤b (Conceptualization, Formal analysis, Investigation, Methodology, Writing – review & editing), and Mitchell D. Feldman 1,¤b (Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Supervision, Writing – review & editing)

Slavko Rogan, Editor

1 Department of Medicine, University of California San Francisco, Division of Geriatrics, San Francisco, California, United States of America

2 San Francisco VA Medical Center, Department of Medicine, Geriatrics, Palliative and Extended Care Service, San Francisco, California, United States of America

3 Department of Dentistry, University of California San Francisco, San Francisco, California, United States of America

Berner Fachhochschule, Switzerland

Competing Interests: The authors have declared that no competing interests exist.

¤a Current address: San Francisco VA Medical Center, San Francisco, California, United States of America

¤b Current address: University of California San Francisco, San Francisco, California, United States of America

Received 2019 Dec 28; Accepted 2020 May 23.

This is an open access article, free of all copyright, and may be freely reproduced, distributed, transmitted, modified, built upon, or otherwise used by anyone for any lawful purpose. The work is made available under the Creative Commons CC0 public domain dedication.

Associated Data

S1 Appendix: Mentor Evaluation Tool. (PDF)

S1 Dataset: MET validation. (PDF)

All relevant data are within the paper and its Supporting Information files.

Abstract

Background

Mentorship plays an essential role in enhancing the success of junior faculty. Previous evaluation tools focused on specific types of mentors or mentees. The main objective was to develop and provide validity evidence for a Mentor Evaluation Tool (MET) to assess the effectiveness of one-on-one mentoring for faculty in the academic health sciences.

Methods

Evidence was collected for the validity domains of content, internal structure, and relationships to other variables. The 13-item MET was tested for internal structure evidence with 185 junior faculty from the Schools of Dentistry, Medicine, Nursing, and Pharmacy. Finally, the MET was studied for additional validity evidence by prospectively enrolling mentees of three different groups of faculty (faculty nominated for, or winners of, a lifetime achievement in mentoring award; faculty graduates of a mentor training program; and faculty mentors not in either of the other two groups) at the University of California San Francisco (UCSF) and asking them to rate their mentors using the MET. Mentors and mentees were clinicians, educators and/or researchers.

Results

The 13 MET items mapped well to the five mentoring domains and six competencies described in the literature. The standardized Cronbach’s coefficient alpha was 0.96. Confirmatory factor analysis supported a single factor (CFI = 0.89, SRMR = 0.05). The three mentor groups did not differ in the single overall assessment item (P = 0.054) or mean MET score (P = 0.288), before or after adjusting for years of mentoring. The mentorship score means were relatively high for all three groups.

Conclusions

The Mentor Evaluation Tool demonstrates evidence of validity for research, clinical, educational or career mentors in academic health science careers. However, the MET did not distinguish individuals nominated as outstanding mentors from other mentors. Further validity evidence can be gathered by studying mentor-mentee pairs and by prospectively following mentor ratings before and after a mentorship training program.

Introduction

Mentorship plays an essential role in enhancing the success of junior faculty. Faculty with mentors report increased productivity, more satisfaction with time spent at work, greater sense of self-confidence about advancement and promotion and ability to be promoted [1–6]. Conversely, previous research has shown that failed mentorship may contribute to mentees not obtaining grant funding and leaving academic careers, among other negative outcomes [7,8]. The National Institutes of Health devoted 2.2 million dollars to create the National Research Mentoring Network dedicated to, among other goals, mentor training and development of mentoring best practices. As a result, an increasing number of academic institutions have implemented faculty mentoring programs [9–13].

Mentor effectiveness depends on multiple, interacting factors and requires more than a mentor with ideal qualities [14–17]. Assessing mentor effectiveness can help institutions provide feedback to mentors to improve mentoring relationships and, in the most extreme cases, identify pairings that are not working so that mentees can seek new mentors. The first step in developing such a mentor assessment instrument is to identify the characteristics of effective mentors. Several investigators have performed such research [14,18–21] and identified the following traits as desirable: expertise in their research field, availability to their mentees, interest in the mentoring relationship, ability to motivate and support mentees, and advocacy for their mentees.

Several evaluation tools have been proposed to measure mentor effectiveness and competency; however, these instruments have limited utility because they are relevant only for specific types of mentors or specific populations, or have not been rigorously validated [14–17]. For example, Berk et al. designed two different scales to evaluate mentors, the Mentorship Profile Questionnaire and the Mentorship Effectiveness Scale [14]. The Mentorship Profile Questionnaire is aimed at research mentors and assessed the nature of the mentor-mentee relationship and specific quantitative outcome measures such as number of publications or grants [14]. The Mentorship Effectiveness Scale, a 12-item Likert rating scale, assessed more subjective aspects of the relationship and qualities of the mentor [14]. Mentees nominated by their mentors tested the Mentorship Effectiveness Scale, but the investigators did not perform psychometric testing to provide evidence of validity for either scale. Schafer et al. [22] developed a medical student mentoring evaluation tool, the Munich-Evaluation-of-Mentoring-Questionnaire, which focused exclusively on medical students’ satisfaction with their mentors; it was tested and found to be reliable and valid. Similarly, the Medical Student Scholar-Ideal Mentor Scale was developed, with validity evidence provided for its scores, for use in assessing mentors for medical student research projects [23].

The Clinical and Translational Science Awards (CTSA) mentoring working group identified five mentoring domains and six mentoring competencies in which clinical and translational science mentors could be evaluated [16,17]. The five mentoring domains were: meeting and communication; expectations and feedback; career development; research support; and psychosocial support [17]. The six competencies included communication and relationship management, psychosocial support, career and professional development, professional enculturation and scientific integrity, research development, and clinical/translational investigator development [16]. Based on these domains, they developed the Mentoring Competency Assessment, a 26-item instrument to appraise the effectiveness of clinical and translational (C&T) science mentors [24]. They tested the reliability of their instrument as well as construct validity by performing confirmatory factor analysis of the instrument against the six domains of competencies for C&T mentors [24]. However, the CTSA mentoring group focused on evaluating C&T clinician and scientist mentors and did not include clinician educator mentors in their study. Dilmore et al. [15] also focused on C&T science mentors by administering the Ragins and McFarlin Mentor Role Instrument [25] to C&T science mentees; they concluded that it had good reliability and validity evidence in capturing multiple dimensions of the mentoring relationship. Furthermore, Jeffe et al. shortened the 33-item Ragins and McFarlin Mentor Role Instrument (RMMRI) and the 69-item Clinical Research Appraisal Inventory (CRAI) into easily administered tools for longitudinally following the progress of junior researchers enrolled in the Programs to Increase Diversity Among Individuals Engaged in Health-Related Research (PRIDE) [26]. These investigators used an iterative process of exploratory principal components analysis to reduce the RMMRI from 33 items to nine and the CRAI from 69 items to 19. The shorter versions of the RMMRI and CRAI retained the psychometric properties of the longer instruments.
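An item-reduction pass of the kind Jeffe et al. describe can be sketched in outline. Their exact procedure and software are not given here, so the following is a minimal, hypothetical numpy illustration of one pruning step: items with weak loadings on the first principal component of the standardized responses are dropped. The 0.40 loading cutoff is an illustrative assumption, not a value from the paper.

```python
import numpy as np

def first_component_loadings(responses):
    """Absolute loadings of each item on the first principal component
    of the standardized response matrix (rows = respondents, cols = items)."""
    X = np.asarray(responses, dtype=float)
    X = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)  # standardize items
    # SVD of the standardized matrix; rows of Vt are the principal directions
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    loadings = Vt[0] * s[0] / np.sqrt(X.shape[0] - 1)
    return np.abs(loadings)  # sign of a PC is arbitrary

def prune_items(responses, min_loading=0.40):
    """One pruning pass: keep indices of items whose loading meets the cutoff."""
    loads = first_component_loadings(responses)
    return [i for i, l in enumerate(loads) if l >= min_loading]
```

In practice such passes are repeated iteratively, loadings are re-examined after each drop, and content experts review the surviving items so that no item is removed on statistical grounds alone.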

The instruments described above either lack sufficient validity evidence or are useful only for a limited population of mentors (C&T mentors or medical student mentors). None has been used to assess the mentoring performance of diverse health sciences faculty who mentor clinicians, educators or researchers. To that end, our objective was to develop and provide validity evidence for a Mentor Evaluation Tool to evaluate the effectiveness of one-on-one mentoring, whether mentors are health science researchers, clinicians, educators and/or career mentors.

The construct for the Mentor Evaluation Tool is mentor effectiveness. According to Healy and Welch, mentorship is “an activity in which more senior or experienced people who have earned respect and power within their fields take more junior or less experienced colleagues under their care to teach, encourage and ensure their mentees’ success” [27]. The National Research Mentoring Network defines mentoring as: “A mutually beneficial, collaborative learning relationship that has the primary goal of helping mentees acquire the essential competencies needed for success in their chosen career. It includes using one’s own experience to guide another person through an experience that requires personal and intellectual growth and development” [28]. We acknowledge that other mentoring models that incorporate team mentoring, peer mentoring, and distant and web-based mentoring are also important to mentee success [7]. However, because a dyadic mentoring relationship is often a key component of many mentoring programs, we chose to focus on this context of mentorship as we developed our tool. We focused on the following domains for the tool: expertise in the field, accessibility to the mentee, interest in the mentoring relationship, and ability to support the mentee’s career and research. Evidence for validity of an assessment tool consists of five areas: content, response process, internal structure, relationships to other variables, and consequences [29]. We focused our psychometric study on content, internal structure, and relationships to other variables.

Furthermore, a literature review revealed that some academic institutions are utilizing mentorship evaluation tools for selecting good mentors and for academic promotion. The Mentoring Function Scale and two-dimensional scales are used to assess teaching staff mentoring in nursing school and clinical nursing staff mentoring in clinical placement [30]. At the China Medical University, mentors to medical students who performed well earned two credit points out of a maximum of 10 toward their annual teaching evaluations, which were used toward academic promotions [31]. Similarly, at the University of Toronto, mentorship activities were noted in the promotion portfolio as part of the faculty’s annual performance review. In addition, awards were given to faculty who demonstrated excellence in mentoring [6]. At the University of California San Francisco, excellence in mentorship is recognized, and annual awards are given to mentors in research and in medical education. Mentorship activities are part of the portfolio for academic promotion. A mentorship assessment tool with validity evidence such as the MET is therefore essential to provide objective data on an individual’s capabilities as a mentor.

Materials and methods

Development of the Mentor Evaluation Tool (MET): Content validity and internal structure evidence

The relevant literature was reviewed to identify mentoring best practices and qualities of effective or admired mentors as well as existing mentor evaluation instruments [14,18,32–38]. Based on the literature review and extensive discussion among the research team and a panel of mentoring experts to identify consistent themes, we initially developed an 18-item mentor evaluation instrument, which was pre-tested with 20 School of Dentistry faculty in 2009 ( Fig 1 ). Based on those results, the Mentor Evaluation Tool (MET) was refined to 13 items with a seven-point bidirectional scale. Five items were eliminated from the initial set of 18 due to low variability or near-universal endorsement (i.e., ceiling effects) by the mentoring experts developing the instrument. Items were mapped to the CTSA domains.
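The two statistical checks that shape this kind of item refinement — screening for ceiling effects and estimating internal consistency — follow standard psychometric practice. The paper does not supply its analysis code, so the sketch below is a hypothetical numpy illustration; the 0.90 ceiling threshold is an assumed value for demonstration, not one reported by the authors.

```python
import numpy as np

def flag_ceiling_items(responses, top=7, threshold=0.90):
    """Flag items where at least `threshold` of respondents chose the top
    scale point (near-universal endorsement, i.e., a ceiling effect)."""
    R = np.asarray(responses)
    return (R == top).mean(axis=0) >= threshold

def standardized_alpha(responses):
    """Standardized Cronbach's alpha: k * r_bar / (1 + (k - 1) * r_bar),
    where r_bar is the mean inter-item correlation across the k items."""
    R = np.corrcoef(np.asarray(responses, dtype=float), rowvar=False)
    k = R.shape[0]
    r_bar = (R.sum() - k) / (k * (k - 1))  # mean of off-diagonal correlations
    return k * r_bar / (1 + (k - 1) * r_bar)
```

Items flagged at the ceiling would be candidates for elimination, as in the 18-to-13-item refinement described above; for the retained 13 items, the authors report a standardized alpha of 0.96 in the Results.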