In the United States, substandard health status has been documented for African Americans, people of Asian and Hispanic origin, and American Indians, even when controlling for socioeconomic status (1). Our assumption is that practicing culturally competent health care has the potential to reduce these disparities and provide better care for all. Cultural competence in medical education can be characterized as teaching knowledge, attitudes, and skills that help physicians develop rapport and understand the health beliefs and practices of their culturally diverse patients (2). The practice of culturally competent health care requires that physicians be familiar with and appreciate the importance of patients’ beliefs about the cause, course, and treatment of the illness and how those beliefs impact diagnosis and treatment. The development of culturally competent attitudes requires encouraging self-awareness and the examination of one’s own values, goals, and stereotypes and how these may negatively affect patient care. Finally, culturally competent skills such as how to use an interpreter and how to elicit an explanatory model can be taught (2, 3).
The Liaison Committee on Medical Education (LCME) has recognized the importance of cultural competence in health care and added objectives ED-21 and ED-22, which state that medical students must learn to understand the patient’s experience of illness and treatment in a cultural context and to identify and address culture and gender biases in themselves and their effects on diagnosis and treatment (4).
Interest in cultural competence among undergraduate and graduate medical educators has increased over the last 20 years. There has been a concomitant increase in resources to assist physicians and other health care workers in learning about this subject. Ochoa (5) has proposed making such efforts a requirement for the accreditation of health professional schools, and journal articles have detailed approaches to teaching culturally appropriate care (6–8). Nonetheless, only 8% of U.S. medical schools offer cultural competence education as a separate course. Despite many advances and initiatives, a survey by Weissman et al. (9) showed that a large percentage of residents were not comfortable providing cross-cultural care and that they lacked the skills necessary to identify cultural health beliefs and practices affecting medical care. Therefore, evaluation measures are needed to detect improvements in cultural knowledge, skills, and attitudes, and they should be specific to the learning objectives of the educational intervention in the medical student curriculum. There is a lack of consensus, however, on how best to evaluate cultural competence education.
Leamon and Fields (10) described the Brief Instructor Rating Scale (BIRS), which is a process evaluation and contains no content evaluation measures. Test questions are usually written by the course instructor of record, and there is no reporting back to the lecturers of how well students mastered their specific objectives. Our measure was an attempt to go beyond multiple-choice examination questions and provide an objective-specific content evaluation that also measured attitude changes and targeted evaluation to a specific presentation.
The purposes of our study were to measure the effectiveness of a presentation designed to increase cultural competence by teaching cultural knowledge about specific ethnic groups; to encourage positive, nonstereotypical attitudes toward patients with limited English proficiency; and to introduce first-year students to interpreting services and teach them interpreting skills. We chose not to evaluate interpreting skills with our written measure, which we felt was inappropriate for assessing those skills.
In 1998, one of the authors (RFL) developed a 2-hour presentation for a first-year medical student course on behavioral sciences and the doctor-patient relationship. We wanted to teach students about the expectations a patient and a physician would bring to the clinical encounter based on their cultures. The presentation, “Cultural Issues in Medicine: Cultural Competence, Patient Expectations, and Using Interpreters,” used excerpts from The Spirit Catches You and You Fall Down (11) to illustrate the differing expectations of Western and Hmong patients as an example of cultural influences on the doctor-patient relationship. The presenters were an assistant clinical professor of psychiatry (RFL) and the director of the hospital interpreting service.
The presentation began with a discussion of U.S. demographics and mandates for cultural competence (12–14). That session was followed by a discussion of interpreting, which included how to arrange the seating in the room, who should and should not be an interpreter, and the difference between “word for word” interpretation and “summary” interpretation. A 12-minute training video, The Therapeutic Triad (15), was shown. The video presents the case of a young monolingual Chinese woman who arrives in the emergency room, agitated, talking about taking a whole bottle of pills (later revealed to be a folk remedy). The vignettes show what can go wrong, such as an unnecessary psychiatric hospitalization, when a trained interpreter is not used as a cultural broker to explain the meaning of the patient’s communication. Pitfalls in interpretation were addressed both in the video and in class. In the last portion of the presentation, a panel of interpreters representing Hmong-, Russian-, and Spanish-speaking cultures discussed health beliefs and practices in their cultures and took questions from the audience.
In November 2003, two of the authors (MS, RFL) developed an instrument designed to assess the attainment of knowledge and attitude objectives using a pretest/posttest model, with matched pairs of surveys when possible (16). Since this was a learning-objective-based questionnaire, no previously used measure in the literature suited our purposes. The pretest was distributed to the students and collected prior to the lecture; the posttest was distributed and collected at the conclusion of the lecture. Because of the limitations of pencil-and-paper tests for measuring interpreting skills, and in order not to overwhelm the students with too many questions, we did not include measures for skills objectives or for all attitude changes sought by the presentation. Institutional review board approval was not required because this was an educational evaluation.
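The matched-pairs bookkeeping described above might look like the following minimal sketch, assuming each form carried an anonymous student code for matching (a hypothetical detail; the text does not specify how pre- and posttests were paired):

```python
# Pair pretest and posttest forms by a hypothetical anonymous student code.
# Unmatched forms are set aside so their demographics can be compared with
# those of the matched group, as described in the results.
def match_pairs(pre: dict, post: dict):
    matched = {code: (pre[code], post[code]) for code in pre.keys() & post.keys()}
    unmatched = {**{c: pre[c] for c in pre.keys() - post.keys()},
                 **{c: post[c] for c in post.keys() - pre.keys()}}
    return matched, unmatched

# Usage with toy data: student "a" completed both forms, "b" only the pretest,
# "c" only the posttest.
matched, unmatched = match_pairs({"a": 1, "b": 2}, {"a": 3, "c": 4})
```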
We developed a questionnaire that consisted of nine items, each on a seven-point Likert scale, with 1=strongly disagree and 7=strongly agree (17). Four questions were designed to measure changes in attitudes and five to measure changes in knowledge. For each question, a direction of response (agreement or disagreement) consonant with the objectives of the course was determined in advance, and we defined composite measures that summarized the degree to which a student’s attitudes and knowledge reflected the course objectives (Table 1). The questionnaire was administered immediately before and immediately after the lecture, yielding pretest and posttest measurements for both domains. In addition, students were asked to provide demographic information at the beginning of the questionnaire.
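The direction-adjusted composite scoring described above can be sketched as follows. The item keys and response directions here are illustrative placeholders, not the actual Table 1 assignments:

```python
# Illustrative item-to-direction maps (NOT the actual Table 1 key):
# +1 means agreement is consonant with course objectives, -1 means disagreement is.
ATTITUDE_ITEMS = {"q1": +1, "q2": -1, "q3": +1, "q4": -1}
KNOWLEDGE_ITEMS = {"q5": +1, "q6": +1, "q7": -1, "q8": +1, "q9": -1}

def rescore(response: int, direction: int) -> int:
    """Map a 1-7 Likert response so 7 always means 'consistent with objectives'."""
    return response if direction == +1 else 8 - response

def composite(responses: dict, items: dict) -> float:
    """Mean of direction-adjusted items; higher = closer to the course objectives."""
    return sum(rescore(responses[q], d) for q, d in items.items()) / len(items)
```

A student who strongly agreed with every positively keyed item and strongly disagreed with every negatively keyed item would score the maximum of 7 on the composite.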
Averages with 95% confidence intervals were computed for the nine individual items and for the composite attitude and knowledge scores, both pre- and postlecture. A linear mixed-effects regression model was fit to test for differences associated with domain (attitude and knowledge), time (pre- and postlecture), and demographic variables while accounting for repeated measures within individual students.
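The interval estimates can be sketched as follows. This is a minimal normal-approximation version (the published analysis may have used t-based intervals), with made-up scores in the usage line:

```python
import statistics
from statistics import NormalDist

def mean_ci(scores, level=0.95):
    """Mean with a normal-approximation confidence interval (reasonable at n≈63)."""
    n = len(scores)
    m = statistics.mean(scores)
    se = statistics.stdev(scores) / n ** 0.5  # sample SD over sqrt(n)
    z = NormalDist().inv_cdf(0.5 + level / 2)  # ≈1.96 for a 95% interval
    return m, m - z * se, m + z * se

# Usage with invented composite scores:
m, lo, hi = mean_ci([4, 5, 6, 5, 4, 6, 5, 5])
```

Non-overlapping pre- and posttest 95% intervals, as reported in the results, are a conservative indication of a difference at p<0.05; the repeated-measures structure itself would be handled by the mixed-effects model (fittable with, e.g., statsmodels’ MixedLM), which is not sketched here.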
Of 91 students enrolled, 83 were present and completed questionnaires; 63 completed both the pre- and posttest, for a response rate of 69% (63 of 91). Students for whom only one questionnaire was available did not differ significantly on any demographic variable. Of the 63 respondents, 32 were women, 26 were men, and five declined to state. Their mean age was 24.5 years (SD=2.23); one respondent did not give an age. The students’ ethnicities were one African American, 26 Asian American, 28 Caucasian, two Hispanic, one of mixed heritage, and five who did not self-identify. Twenty-eight students were monolingual in English, 23 were bilingual, 10 were trilingual, and one was multilingual. Forty-seven students were born in the United States, while 15 were foreign born. About half (N=32) of the students had no interpreting experience, while 30 had some experience. Only nine had any formal interpreter training; 53 had none. For both domains, the pretest and posttest 95% confidence intervals did not overlap, indicating a statistically significant difference (p<0.05). No demographic covariate was significantly related to the composite scores in the regression model (p>0.07 in all cases).
We have demonstrated a content-based evaluation approach, using learning objectives to guide both our instruction and our assessment. Since many instructors do not get the opportunity to assess the impact of their particular lecture in a multiple-instructor course, our method may suggest a way of obtaining more specific feedback to quantify mastery of lecture content and, if a retention component were included in the protocol, of measuring mastery at 6 months. Our intent was not to create a generalizable instrument, but to show medical school instructors how to apply sound educational principles to obtain content evaluation at a more specific level.
Our results support the effectiveness of a brief, well-designed presentation in increasing the knowledge and affecting the attitudes of first-year medical students regarding cultural competence in the doctor-patient relationship and in the use of interpreters. Our presentation was effective with both male and female students, with students of all represented ethnic groups, and with students from different countries and with different native languages. Our students showed a greater change in knowledge than in attitudes, consistent with our belief that attitudinal change is more difficult to achieve using a lecture format. Nonetheless, a change in students’ attitudes toward culture and interpretation was suggested by our results, particularly in those students who had no prior interpreting experience. In our experience, Caucasian students have sometimes complained that teaching in cultural competence seems “politically correct,” irrelevant, and impractical, but the interpreting experience makes the value of cultural competence with ethnically diverse patients more evident. Our results support the utility of providing exposure and training in interpreting to medical students in the curriculum to increase cultural awareness and competence.
The study has several limitations. Our survey could have included more questions, but we kept it short and easy to answer to improve the response rate. Because all first-year medical students are required to attend this lecture, no comparison group was available to measure the effects of retaking the test, short of testing the students who chose not to attend. We believe the greater attitude change among students without prior interpreting experience was primarily due to their lack of awareness of the impact of culture on medical care, since they had no direct experience of the interpreting process. In addition, some of our attitude questions could be interpreted as beliefs or opinions; although we consider these components of attitude, in future iterations of this measure the questions would be reworded to elicit attitudes more specifically (18).
Teaching cultural competence in the medical school curriculum can be a challenging task, especially when it is seen as less important than the basic sciences. Our experience suggests that it is possible to have a positive impact on first-year medical students with only a 2-hour presentation if it is thoughtfully constructed to demonstrate the importance of cultural issues in patient care. Our study is limited by the use of a self-report evaluation of our learning objectives immediately following the presentation. We could have improved our response rate had we distributed the paired questionnaires to the students prior to the lecture and collected the pretest as they entered the room, and we could have used the absent students as a comparison group. Future studies should evaluate students on the retention and application of the learning objectives at intervals after the presentation, such as their cultural knowledge on a formal exam, their attitudes toward culture and interpreting, or their demonstration of interpreting skills in a clinical setting or an objective structured clinical examination.