Curriculum planning and development is very much on today's agenda for undergraduate, postgraduate, and continuing medical education. The days are past when the teacher produced a curriculum like a magician producing a rabbit out of a hat, when the lecturer taught whatever attracted his or her interest, and when students' clinical training was limited to either a group of hospitalized patients or those attending the clinics (1). It is now accepted that careful planning is necessary if a program of teaching and learning is to be successful. A curriculum has at least four important elements: content, teaching and learning strategies, assessment processes, and evaluation processes. The process of defining and organizing these elements into a logical pattern is known as curriculum design (1).

The Department of Psychiatry, School of Medicine, Kuwait University, delivers block teaching to 6th-year medical students, who attend the psychiatric rotation in three groups; each group consists of about 30–35 students, and the rotation lasts 6 weeks. Over the years, the teaching had consisted of 37 didactic lectures, 25 case conferences (in which students, under the supervision of their respective clinical tutors, presented patients), and 16 hours of weekly, tutor-supervised contact with patients.

The assessment procedures included an end-of-course examination (30% of the grade) and an annual examination (70% of the grade). The end-of-course examination included multiple-choice questions (MCQs) and a short case (SC) presentation; the annual examination consisted of MCQs, three short essay questions (SEQs), an oral examination (OE), and a long case (LC) presentation. The end-of-course MCQs comprised 50 single-best-answer and 10 multi-tailed true/false questions, and the annual examination MCQs comprised 85 single-best-answer and 15 multi-tailed true/false questions.
The SEQs were descriptive in nature, requiring recall of factual knowledge about the diagnosis and management of common clinical conditions (schizophrenia, depression, mania, substance abuse, and organic mental syndromes) and about the indications, side effects, and classification of psychotropic medications, including antipsychotics, antidepressants, mood stabilizers, and anxiolytics. The OE required students to know the management of common clinical problems such as deliberate self-harm, insomnia, the agitated or suspicious patient, and the confused or forgetful patient. The short case, oral examination, and long case were each administered by two examiners, one of whom was external in the annual examination.
In 2003, the Faculty Administration launched a series of initiatives to upgrade its teaching program. A critical appraisal of the existing curriculum was carried out, and groups from Harvard Medical International and from Sydney, Australia, and Dundee were invited to conduct a series of workshops in which to deliberate the upcoming reforms. The changes introduced in the psychiatric curriculum were consistent with the overall objectives identified in the new Faculty Curriculum reform program. The objective of the present study was to compare students' examination performance before and after these changes were introduced.
We hypothesized that the newly introduced changes, namely the OSCE and the all-clinical, scenario-based short essay paper, would be largely responsible for the difference in scores between the 2007 and 2008 groups.
The sample consisted of all 6th-year medical students attending the psychiatric rotation during the academic years 2006–7 and 2007–8: 96 and 84 students, respectively, about two-thirds of them women.
Description of the Changes Introduced
In view of the reported advantages of the Objective Structured Clinical Examination (OSCE) over the traditional one-off short/long case examination (2–4), it was decided to replace the short cases in the end-of-course assessment, and the OE part of the annual examination, with an OSCE. An OSCE consists of a series of standardized clinical scenarios, designed here to match the Interview and Communication Skills, Clinical Competency, and Professional Development sections defined in the Learning Objectives (5–7). Our OSCE comprised six stations in the end-of-course assessment for each group and eight stations in the annual examination. The stations covered a wide range of topics, including insomnia, suicidal risk, cognitive impairment, delirium, depressed mood, explaining the nature of the illness to the patient, handling criticism, and soliciting support from patients' family members. During the training period, interviewer attributes and interview structure were demonstrated, and the expected competency level was explained and practiced in small-group sessions. Finally, the one remaining descriptive SEQ (of the three) was also converted, so that all SEQs became clinical, scenario-based questions.
Data were analyzed by SPSS Version 16 (SPSS Inc., Chicago, IL).
We computed the mean scores for each component of the examination for each group of students (i.e., 2007-Year group, 2008-Year group, and the top 10 students for each year). The difference between groups was assessed by t-tests, with a 5% level of significance. All tests were two-tailed.
Comparison of Scores Between 2007 and 2008 Students
First, for the end-of-course tests, the total scores for the 2007 and 2008 groups were similar (22.7% each). In particular, the introduction of the OSCE in 2008 to replace the SC did not appear to have a significant impact on test scores, as compared with the 2007 group. Second, in the final-year examinations, the results were mixed: although the 2007 students had significantly higher scores on the essay examination (p<0.02) and the long case (p<0.007), the 2008 group had significantly higher scores on the OSCE (versus the oral examination for the 2007 group; p<0.001). Third, when the marks were classified into grades, there was no significant difference between the two groups (χ² = 8.7; df = 4; p = 0.07; χ² for trend = 1.15; p = 0.3).
Comparison of the Top 10 Students Between the Years 2007 and 2008
In comparing the top 10 scoring students from the two classes, we found that the 2007 students had significantly higher overall scores, both in the end-of-course tests (p<0.01) and in the annual examinations (p<0.009). In particular, the OSCE scores of the 2008 class were significantly lower than the oral examination scores of the 2007 class (p<0.05). The 2007 top 10 students' scores were significantly higher across all six examination components; the differences were largest in the LC, SEQs, MCQs, and OSCE, in that order.
It is important to mention some methodological shortcomings of the study. First, it is difficult to draw comparisons between the 2007 and 2008 students' scores because of the differing examination formats used in the two groups. The comparison can best be described as qualitative rather than quantitative in nature.
Second, the sample size is small, which makes it difficult to draw firm conclusions. Within these limitations, our results show that, although the 2008 students scored comparatively low in all the domains measuring recall of factual knowledge (and in their overall score), they performed better in the assessment components measuring clinical skills and professional development. Indeed, one of the main shortcomings of the previous assessment format was its limited ability to test students' clinical competency. The OE tested factual knowledge, essentially replicating the MCQs and SEQs: all three measured the same component, namely recall of factual knowledge. And because students were not observed while examining patients for the short-case presentations, accurate assessment of their clinical skills was not always possible. Replacing the OE and the SC with the OSCE was meant to add a Clinical Competency element to the assessment tools.
The introduction of the OSCE has been a landmark step in the assessment of clinical processes, including rapport; clinical skills (history-taking, systematic inquiries, performance of the mental status examination); clinical communication (information supplied by the student to the patient, responses to the patient's queries, and soliciting the cooperation of family members); and prescription-writing (7, 8, 16). The "luck-of-the-draw" factor in patient and examiner selection, and the examiner-led, open-ended format of the Individual Patient Assessment/OE, are controlled in this innovative examination format. In a study involving more than 10,000 medical students, the correlation between two examiners in a single patient encounter was less than 0.25 (9). Simulated patients demonstrating identical behavior ensure standardization and have also been reported to dilute the confounding effects of patient variability and examiners' idiosyncratic marking styles on student evaluation (10–12). The introduction of the OSCE, and the reduction of didactic lectures in favor of teaching OSCE sessions, were meant to bring the Clinical Competency and Professional Development domains into the psychiatric clerkship.
The OSCE stations covered most of the topics identified among the "minimal competencies" articulated at the beginning of the rotation; the rating scales contained diverse items involving theoretical as well as clinical processes, such as interview and communication skills, the mental status examination, information-giving, and patient reassurance; and our actors (psychiatric nurses) were well versed in the various psychiatric presentations, which were adequately rehearsed and enacted.

The lower OSCE scores of the top 10 2008 students, relative to the OE scores of the top 10 2007 students, were a matter of concern, since they limited these students' chances of possible distinction awards. The obvious implication is that either the OSCE was not suited to identifying top students, or the top students, despite a sound base of factual knowledge, were not necessarily equally skillful performers. This may also partly explain why the OSCE, among all the assessment components, discriminated least between the top 10 and the rest of the 2008 class (Table 1). The binary checklist used in our OSCE grading may not have been ideally suited to assessing cognitive skills that require synthesis, integration, organization, and application of factual knowledge in a given clinical situation: the likely attributes of top, rather than average or below-average, students. This is supported by earlier reports that binary checklists for marking the OSCE may be insufficiently sensitive to detect some of the more advanced psychiatric skills, such as empathy, rapport, and ethics (8, 13). The use of binary scales for content-specific theoretical knowledge and global scales for clinical process (8, 14, 15), planned for future marking, is likely to minimize this limitation of the OSCE.
TABLE 1. Comparison of Mean Scores for 2007 and 2008
It has been stated that the validity of psychiatry OSCEs may have certain limitations involving measurement (content-specific checklists may not capture the nuances of a psychiatric interview), time (the time allocated to an OSCE station is often much shorter than that required for a psychiatric interview in the clinic), and complexity (complicated psychiatric presentations are difficult to simulate) (15). Advanced competencies, such as the therapeutic alliance, transference issues, and the synthesis of biopsychosocial factors, can be difficult to assess properly in OSCEs (8, 15). Unlike OSCEs in other specialties, such as general surgery, which can rely on either real patients or actors, the psychiatry OSCE depends entirely on the performance of actors, and complex presentations such as thought disorder are difficult to replicate (8). The Royal College of Psychiatrists, U.K., which introduced the OSCE into its Part 1 Membership Examination in 2003, maintains that the Individual Patient Assessment will continue to remain an essential component of the Part 2 Membership Examination (16–18). We believe there is a case for retaining the long case as an assessment tool: the LC examination promotes a holistic approach to making diagnostic formulations, integrating biopsychosocial and comorbid data, and devising management plans geared toward the care of the individual patient. We therefore decided to retain the LC, which, in combination with the OSCE, diversifies the assessment of professional competency.
Since the OSCE has only just been introduced into the psychiatry clerkship, it is too early to judge its overall impact. However, it marks the beginning of a reform program aimed at delivering a more dynamic, problem-based, rather than department-based, teaching program, with clinical skills and professional development as its essential components. Further studies to determine the validity and reliability of the OSCE are under way.