The Accreditation Council for Graduate Medical Education (ACGME) Outcome Project implemented a major shift in residency education in 2001 by moving to a competency-based format (1). This created a challenge for programs to develop strategies to measure and document resident performance in the core competencies.
An often-used approach has been direct observation of residents and the use of standardized patients. We explored the use of a virtual patient (VP) to augment our assessment of resident skills. The variety of VP programs is vast: many are used to practice medical procedures, fewer to practice interviewing and diagnostic skills. VP technology ranges from showing images and text on a computer screen to projecting a patient image on a screen (2), with some programs accepting oral questions from the learner, recognizing key words in the question, and then producing a verbal answer (3–5). Some programs also use video and audio clips, including heart sounds, MRIs, and X-rays.
Depending on the VP program, timelines and patient story development can be incorporated by unlocking additional questions once a critical question has been asked (6–8). These sophisticated programs typically start at $15,000, with costs rising quickly with program complexity. VP programs offer multiple advantages: they can provide formative or summative feedback and assess clinical reasoning and decision-making skills (9). Individual resident performance can be assessed and compared over time, and with peers, in a highly standardized manner not possible with actual patients or live standardized patients (SPs). Virtual programs have the advantage of consistency when compared with objective structured clinical exams (10). VPs allow the resident to practice, and programs to evaluate, competence in recognizing a variety of diagnoses, which is especially useful for those not commonly encountered. The ability to incorporate sociocultural, ethical, and religious issues that can influence diagnosis and treatment outcomes further enhances the learning process (4).
Virtual encounters are also psychologically safer for the novice interviewer. VP technology did not replace live SPs; it was incorporated into our program because it offered several advantages: it was less expensive (the technology was free, and no standardized-patient fees were incurred); it eliminated the need for an on-site preceptor; and it reduced scheduling conflicts among resident, SP, and preceptor, while providing an efficient, objective way to educate, evaluate, and monitor resident progress in the core competencies. This article describes how VP technology was used to objectively and efficiently evaluate resident performance.
Web-SP Technology Description and Implementation
The VP program chosen was Web-SP (http://websp.lime.ki.se) (11): an agreement between the Karolinska Institute (the developer) and our institution made it available free of charge, and it could be adapted for the Department of Psychiatry's use without additional cost. This VP program allowed assessment of medical knowledge (MK), practice-based learning and improvement (PBLI), and systems-based practice (SBP). Institutional Review Board approval was obtained, and residents gave informed consent for the use of their performance data. Web-SP comes with standard patient templates, both healthy and with common medical conditions. The healthy-patient template was adapted to psychiatric cases by KSW on the basis of composites of real patients.
An independent psychiatrist reviewed and confirmed accuracy of the diagnoses. Creating psychiatric cases appropriate to the technology required approximately 4 hours per case. PGY 1–4 residents had access to one VP case for 1 week. Residents completed cases individually but could consult reference material.
The Web-SP software allows the user to access three exam areas: 1) patient interview; 2) physical exam; and 3) lab tests. The authors focused on case development and evaluation of the patient interview. The Patient Interview section was divided into six areas (present problem, past history, family/social history, review of systems, medications, and allergies). Each section contained predetermined questions, some from the original template ("What brings you in today?") and others created for our cases (e.g., "Have you felt this way before or had other symptoms of depression? Describe exactly how current symptoms and the feeling that you ‘are not yourself’ started/progressed."). There were 167 questions associated with the Patient Interview. Any physical exam procedure and each lab test ordered (35 available) counted as one question. Residents chose which questions to ask, how many, and in what sequence. Residents were prompted to provide primary and differential diagnoses and a treatment plan, along with their rationale. Web-SP provides the evaluator with the number and order of questions asked, as well as the diagnosis, differential, and treatment decisions. Written feedback on the diagnoses, differentials, and treatment plan was discussed with residents. Resident feedback about the case was solicited with a questionnaire using a Likert scale ranging from 1 ("Not Realistic At All") to 5 ("Very Realistic"). Typical questions were: "How realistic was the case?" and "Was the case an accurate reflection of your skills?"
The VP was a 49-year-old man with major depressive disorder, panic disorder without agoraphobia, and alcohol abuse. All 10 residents completed the case; 9 residents had the correct primary diagnosis; 1 had it in the differential. As a group, they asked 62% of the 167 psychiatric and medical questions. An important differential diagnosis in this case was mood disorder due to hypothyroidism; 8 of 10 residents checked lab values and suspected hypothyroidism. All 10 residents had the correct secondary diagnosis of panic disorder without agoraphobia. An antidepressant was prescribed by 8 of 10 residents, with 2 choosing not to prescribe initially, but preferring to start with supportive therapy. Six residents recommended cognitive-behavioral therapy, and two recommended Alcoholics Anonymous. Additional information concerning resident responses by resident year can be obtained by contacting the first author.
In evaluating this case and the use of VP technology, 100% of residents recommended that this form of evaluation be continued. Using a 5-point scale (5 indicating strong agreement), residents found the use of VP technology to be realistic (mean: 4.1; standard deviation [SD]: 0.70), useful in developing diagnostic/decision-making skills (mean: 3.6; SD: 1.13), and an accurate reflection of their diagnostic and decision-making skills (mean: 4.0; SD: 0.58).
Competency assessment in resident education is critical for developing and graduating effective, capable psychiatrists. The ACGME asked residency programs to develop strategies for measuring and documenting residents' performance in multiple domains. The results of this pilot project have highlighted some of the benefits and challenges of using VP technology to assess medical knowledge (MK), practice-based learning and improvement (PBLI), and systems-based practice (SBP).
We provided feedback to residents about the correctness of their diagnosis and treatment plan via a written summary. Explanations were detailed for the differentials, treatment plan, and rationale for treatments. Additional discussion allowed critical reasoning skills to be assessed and explored whether the treatment plan was comprehensive and aligned with the diagnoses. Although VP technology is no substitute for clinical experience, it does provide a standardized format to evaluate and monitor residents' progress and to expose each resident to unique or rare situations. Using this particular program created no financial hardship, given that it was provided free. However, there was no financial advantage, because we continued to use direct observation of clinic patients and SPs for assessment. The advantages of VP use include enhanced efficiency in the assessment and documentation of resident competence in the domains of MK, PBLI, and SBP. It was a convenient and expedient way to identify gaps in learning and issues related to clinical decision-making. Results from the VP case were used to assess the effectiveness of teaching and as a guide for future educational direction. The training program continued to utilize direct observation of resident performance in live situations throughout all 4 years.
There are several limitations to the use of VP technology in residency training. Evaluating the competencies of interviewing and communication skills, professionalism, and patient care is difficult with less technically advanced VP programs such as the one the authors used, because the resident cannot be observed interacting with the patient. Furthermore, the therapeutic alliance cannot be evaluated, nor can the resident's ability to formulate appropriate diagnostic and follow-up questions. Also, the cases are time-intensive to develop; they need to fit the program structure; and the final product must be carefully inspected to ensure that all questions are entered in the program with the correct responses. Additional faculty time (a person naive to the case) is required to review for quality control. A test case should be developed for residents to familiarize themselves with the computer process before their first formal case evaluation. VP technology is being integrated into undergraduate medical education at a rapid pace, and it is important to explore its suitability in psychiatry residency education. Our initial use of this technology in the psychiatry residency program suggests that it can efficiently assess a resident's medical knowledge and diagnostic and treatment decision-making and that it is accepted by residents as a useful tool for assessment and learning.
This article contains information presented at the Association of Directors of Medical Student Education in Psychiatry (June 2008), the American Association of Directors of Psychiatry Residency Training (March 2009), and the Association for Academic Psychiatry (October 2009).
At the time of submission, the authors reported no competing interests.