To improve the quality of graduate medical education for physicians, the Accreditation Council for Graduate Medical Education (ACGME) has implemented numerous changes in accreditation standards in recent years. Specifically, medical residency programs are being held to increasingly high educational standards for assessing residents and ensuring educational program quality and outcomes. Future plans detail even higher standards, including expectations that programs will utilize data-driven methods for evidence-based program evaluation and improvement (1). Although there is a great deal of leeway in potential methods for assessing resident competence (2), little previous research reports how various sources of data have been used for residency program and curriculum improvement. Knight et al. (3) explicitly describe a process for utilizing multiple sources of program data for program evaluation and improvement rather than using such data only to assess individual resident competency.
Lee et al. (4) have suggested a potential blueprint for surgical residency programs that incorporates multiple assessments by multiple stakeholders using multiple tools to align curriculum objectives with program outcomes. Amery and Lapwood (5) have shown the importance of using multiple assessment tools to gain a broad perspective on physicians’ educational needs. They found discrepancies between physicians’ self-assessments of competency and the educational needs documented in physicians’ personal diaries and concluded that these discrepancies had important implications for curriculum development. Therefore, using a multidimensional and reflective approach to self-assessment, such as educational diaries or portfolios, may yield valuable additional data for program and curricular improvement, especially when examined in context with other assessment data.
In a study of the quality of psychiatric residency programs, Yudkowsky et al. (6) have described how the multiplicity of stakeholders in a program complicates the task of evaluating the quality of the residency program. They found that two of the stakeholders, residents and program directors, have different definitions of program quality based on their differing perceptions of the program’s content, context, culture, and consequences (6). Both program directors and residents appear to value curriculum and clinical resources, but program directors tend to be more concerned with content-related aspects of the program (e.g., faculty supervision, classes, evaluations), while residents tend to be more concerned with consumer-oriented issues related to the context (e.g., reputation, mission) and consequences (e.g., outcomes such as passing board exams) of the program.
This previous literature suggests a need to investigate the utility of a multidimensional approach to evaluating the curricular content and quality of a residency program. Few studies have specifically investigated faculty and resident perceptions of program teaching quality and of resident competence in the content areas taught. The purpose of this study was to compare faculty and resident physician perceptions of teaching quality and resident competence for 13 core psychiatric skills and the six ACGME general competencies. Additionally, resident performance as assessed by portfolios was descriptively examined relative to these perceptions of skills and competencies. This combination of faculty, resident, and portfolio data potentially offers a practical approach to identifying areas for improvement within a program curriculum.
This study was conducted at the University of Arkansas for Medical Sciences Psychiatry Residency Training Program and was approved as exempt by the university’s institutional review board. All 10 core teaching faculty for the department and 18 residents (82%) from postgraduate years 1–4 completed parallel surveys.
A team of educators within the department designed the parallel surveys, administered to both residents and faculty, to assess perceptions of the quality of the program’s teaching and of residents’ competence, specifically for this program. The areas included in the survey were derived from faculty-identified competencies previously described (7, 8) and the ACGME competencies. The surveys consisted of 19 questions: one each for the 13 core psychiatric skills and the six ACGME general competencies. The 13 psychiatric skills were determined by faculty and residents within the department using the residency review committee requirements for psychiatry (7, 8). The resident survey items were worded as follows: “Rate the program in teaching you the skill, and rate your level of competency in the skill.” Faculty survey items were worded as follows: “Rate the residency program in general for teaching the skill, and rate the competency of the average resident on completing the skill.” Response options were based on a 5-point Likert scale, with 1=poor and 5=excellent. To enhance the participation of faculty and residents, all surveys were administered anonymously.
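To make the instrument’s parallel structure concrete, the sketch below encodes it as a simple data structure. The item stems and response anchors come from the description above; the six ACGME general competencies are the standard published ones, and the skill list includes only the areas named in this article, so it is an illustration rather than the program’s actual survey form.

```python
# Illustrative encoding of the parallel survey structure (hypothetical;
# only skill areas named in this article are listed of the full 13).
ACGME_COMPETENCIES = [
    "patient care",
    "medical knowledge",
    "practice-based learning and improvement",
    "interpersonal and communication skills",
    "professionalism",
    "systems-based practice",
]

PSYCHIATRIC_SKILLS = [  # partial list; the survey covered 13 areas
    "initial treatment/diagnosis",
    "biopsychosocial formulation",
    "medical psychiatry",
    "neuropsychiatry",
    "psychotherapy",
    "communication",
    "legal issues",
    "treatment course",
    "treatment modality",
]

# Parallel item stems, one item per skill or competency (19 items total).
RESIDENT_STEM = ("Rate the program in teaching you the skill, "
                 "and rate your level of competency in the skill.")
FACULTY_STEM = ("Rate the residency program in general for teaching the "
                "skill, and rate the competency of the average resident "
                "on completing the skill.")

# 5-point Likert response scale shared by every item.
LIKERT_ANCHORS = {1: "poor", 5: "excellent"}
```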
As described previously, the psychiatric residency program at the University of Arkansas for Medical Sciences instituted a novel performance measure known as showcase portfolios to evaluate competency in designated areas specific to the practice of psychiatry (7, 8). Each portfolio entry reflects actual resident work within one of the 13 psychiatric skill areas and may contain documentation such as progress notes, letters, radiological reports, legal documents, laboratory values, or anything relevant to the skill area being showcased; importantly, each entry is accompanied by a resident-composed, self-reflective cover letter. Previous research has established the reliability and validity of this portfolio tool (9), showing that portfolio scores were modestly correlated with psychiatric knowledge as assessed by scores on the Psychiatry Resident-In-Training Examination. Generalizability analyses found that the score for the whole portfolio can be reliable with six entries and two raters and that portfolio scores tended to improve with years of training (9). After the ACGME general competencies were instituted, the University of Arkansas for Medical Sciences psychiatric residency program broadened the utility of the showcase portfolio by demonstrating that it also reflected resident performance within the six ACGME general competencies (8).
Two board-certified psychiatrists, external to the residency training program, were trained to score portfolio entries. For the 13 psychiatric skill areas, the scoring rubric was a 6-point ordinal scale, where 1=not competent, 3=competent, and 6=highly competent. Resident performance was measured by averaging the two raters’ scores for each psychiatric skill portfolio entry. For each of the portfolio skill entries, the raters also scored the six general competencies on a 3-point ordinal scale, where 1=below competent, 2=competent, and 3=above competent. The two raters’ scores were averaged for each skill, and these averages were then combined across all of a given resident’s entries to obtain a measure for each of the six general competencies. The raters scored a total of 177 portfolio entries, representing all but one (treatment modality) of the 13 psychiatric skill areas.
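This aggregation amounts to two averaging steps, which the short sketch below illustrates; the resident identifiers, skill labels, and scores are invented for the example, as the study’s actual records are not reproduced here.

```python
# Illustrative sketch of the two-step score aggregation described above
# (hypothetical records; not the study's actual portfolio data).
from collections import defaultdict
from statistics import mean

# Each record: (resident_id, skill_area, rater1_score, rater2_score)
entries = [
    ("R01", "legal_issues", 3.0, 2.0),
    ("R01", "neuropsychiatry", 2.0, 3.0),
    ("R02", "legal_issues", 4.0, 5.0),
]

# Step 1: average the two raters' scores for each portfolio entry.
entry_scores = [(rid, skill, mean([r1, r2]))
                for rid, skill, r1, r2 in entries]

# Step 2: average entry scores across all of a resident's entries,
# as done for each of the six ACGME general competencies.
per_resident = defaultdict(list)
for rid, _, score in entry_scores:
    per_resident[rid].append(score)

resident_measures = {rid: mean(scores)
                     for rid, scores in per_resident.items()}
print(resident_measures)  # e.g., {'R01': 2.5, 'R02': 4.5}
```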
Mean scores for perceptions of teaching quality and resident competency were computed and compared for faculty and resident responses to each survey item. Mann-Whitney U test statistics were used to evaluate differences between faculty and resident responses (p<0.05). These mean scores, derived from the anonymous surveys, were also descriptively examined relative to the median ratings of resident performance as assessed by the trained portfolio raters.
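As a concrete illustration of this analysis, the following sketch applies SciPy’s Mann-Whitney U test to one hypothetical survey item; the ratings are invented, though the group sizes match the 10 faculty and 18 residents surveyed.

```python
# Minimal sketch of the faculty-vs-resident comparison described above,
# using SciPy's Mann-Whitney U test (ratings are hypothetical).
from scipy.stats import mannwhitneyu

# Hypothetical 5-point Likert ratings (1=poor, 5=excellent) for one item.
faculty_ratings = [4, 4, 5, 3, 4, 4, 5, 3, 4, 4]        # n=10 faculty
resident_ratings = [3, 4, 4, 3, 5, 4, 3, 4, 4,
                    3, 4, 5, 4, 3, 4, 4, 3, 4]          # n=18 residents

# Two-sided test; p<0.05 was the study's threshold for a significant
# difference between faculty and resident ratings on an item.
stat, p = mannwhitneyu(faculty_ratings, resident_ratings,
                       alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.3f}")
```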
Faculty and resident perceptions of teaching quality for the 13 core psychiatric skills are summarized in Table 1. There were no significant differences in faculty and resident perceptions of the program’s teaching for these core psychiatric skills. Mean scores for nine of the 13 skill areas reflected average ratings. Both faculty and residents rated the skill area of neuropsychiatry as slightly below average. Teaching for the skill areas of communication and psychotherapy was rated as slightly below average by faculty only. Only one teaching area, initial treatment/diagnosis, was rated by both faculty and residents as being taught above average.
Faculty and resident perceptions of teaching quality for the six ACGME general competencies are summarized in Table 2. There were no significant differences in faculty and resident perceptions of the program’s teaching for these general competencies. Mean scores for all six competencies reflected at least average ratings by both residents and faculty. Teaching for one competency, patient care, was rated slightly above average by faculty, with a mean score of 4.4, whereas teaching for the competency of professionalism was rated slightly above average by residents, with a mean score of 4.0.
Table 3 summarizes faculty ratings of average resident performance, resident self-evaluation of performance, and median portfolio ratings for core psychiatric skills. The faculty rated residents’ competency in all skill areas as at least average and rated competency in initial treatment/diagnosis as slightly above average. Three skill areas (initial diagnosis, legal issues, and treatment course) were rated as competent by faculty and residents, but residents received ratings of less than competent in these areas for entries in their portfolios. No portfolio entries were submitted to assess performance in treatment modality.
There were significant differences between faculty and resident perceptions of competency in biopsychosocial formulation and medical psychiatry. Both faculty ratings and portfolio performance ratings indicated competence in biopsychosocial formulation; however, residents rated themselves as slightly below average. For medical psychiatry, even though there was a significant difference between the two groups, both faculty and residents rated competency as slightly above average; portfolio performance also indicated competence in this area, with a median score of 3.
Table 4 summarizes faculty ratings of the average resident performance, resident self-evaluation of competence, and portfolio ratings for the six ACGME general competencies. There were no significant differences between faculty and resident perceptions of competency for the six ACGME general competencies (all Mann-Whitney U statistics, p>0.05). Mean scores by both faculty and residents reflected average ratings for residents’ performance in five of the general competencies. Faculty rated residents’ performance in patient care as slightly above average, with a mean score of 4.3. Portfolio entries received ratings of competent.
The primary focus of this research was to examine the utility of a multidimensional, multiperspective assessment approach for identifying areas for improvement within a medical residency curriculum. We examined faculty and resident perceptions of teaching quality and resident competency, as well as competency data derived from portfolio ratings. Results suggest that residents perceive themselves as competent in, and faculty believe they are effectively teaching, the core psychiatric skill areas and the six ACGME competencies; however, the combination of faculty, resident, and portfolio data reveals incongruence among these three data sources. Overall, this incongruence was evident in three skill areas: initial diagnosis, legal issues, and treatment course. Although both residents and faculty perceive residents as competent in these areas, the portfolio data indicated below competent performance in the entries residents presented. Importantly, portfolio ratings for neuropsychiatry skills, which were below competent, were congruent with below average perceptions of competency and teaching by both faculty and residents. In comparing faculty and residents, a general pattern emerged of faculty rating resident competence slightly higher than residents rated themselves; this difference was significant in the skill areas of biopsychosocial formulation and medical psychiatry.
The comparison of several sources of data in our study yields evidence that faculty and resident perceptions of competency can diverge from actual performance, as assessed by resident portfolios scored by trained raters. In considering the credibility of these sources of information, it is useful to examine these findings relative to what is known about physicians’ self-assessment and portfolio assessment. The literature has shown that physicians tend to be inaccurate self-evaluators (2, 9); weak to nonexistent associations have been found between physicians’ self-assessments and external assessments of physician competence (10). In terms of the credibility of portfolio assessment, O’Sullivan et al. (9) have previously demonstrated the reliability and validity of the University of Arkansas for Medical Sciences psychiatry portfolio process, which uses trained external raters.
Davis (10) specifically calls for externally guided self-assessment processes, such as portfolios, to improve physician performance. Portfolios, as a multidimensional, self-reflective, externally rated assessment tool, can be an effective method for identifying areas for program improvement, especially when combined with resident and faculty perspectives to pinpoint where any discrepancies lie. Other investigators have likewise noted that incorporating reliable and valid external assessments, such as those by experts or trained raters, into performance assessment activities can be an effective source of external information to guide physician self-awareness (11).
Additional research in this area would be useful. The study findings of below average performance in initial treatment/diagnosis, legal issues, and neuropsychiatry should be interpreted cautiously because of the small sample size in each area. Also, this study was limited to psychiatric residents at one institution and may not be generalizable to other programs or specialties. However, the process of comparing multiple perspectives and measures to evaluate program quality and identify areas for curricular improvement could be adapted by other specialties. In addition, the finding of differing perceptions between residents and faculty regarding resident competency and actual performance warrants further exploration. Other specialties may identify specific skill areas and administer a survey addressing the quality of program teaching, residents’ perceived competence in the identified areas, and faculty perceptions of the average resident’s competence in those areas.
Our findings highlight the usefulness of a multidimensional approach that includes different perspectives and externally rated performance measures to assist in program evaluation and improvement. Deficiencies in resident physician performance may be associated with weaknesses in a residency program’s curriculum, which is the educational structure for the content, experiences, and supervision that a resident receives. Recognizing areas of incongruence may prompt further exploration of residents’ self-evaluation as well as teaching quality. Because residency education involves multiple stakeholders with differing and multidimensional perspectives, it is beneficial to maintain a systematic process for evaluating multiple data sources. The ACGME suggests that programs maintain a process for using assessment results together with other program evaluation results to improve the residency program (1). Our study offers a possible template for incorporating both of these requirements, utilizing multiple data sources in a systematic evaluation process.
At the time of submission, Drs. Gathright, Thrush, Jarvis, Clardy, and O’Sullivan and Ms. Hicks disclosed no competing interests. More than 2 years earlier, Dr. Cargile was on the speakers’ bureaus of Pfizer and Eli Lilly.