Academic training centers offer a unique opportunity to provide quality services in an environment informed by the most up-to-date advances in patient care, and patients often seek out academic centers for this reason. Care at such institutions often includes services from clinicians-in-training—those who have obtained a medical degree and are receiving advanced training through a residency or fellowship program. Service delivery at institutions with training programs is designed to offer equivalent levels of care from attendings and clinicians-in-training. Despite this, patients often have mixed opinions about being assigned to clinicians-in-training, and some request to be seen only by attendings. This may be due in part to patients’ misperceptions regarding clinicians-in-training (1); nonetheless, it adds to the complexity of providing care in training institutions. Studies have addressed the impact of clinicians-in-training on service delivery from a variety of perspectives, each of which is reviewed briefly below.
Adopting training as a core institutional component carries a financial cost. Research addressing this issue has focused primarily on the monetary impact of clinicians-in-training and their utilization of resources in inpatient settings. Wachter et al. (2) studied a hospital-based training setting and found no differences in patient mortality, morbidity, or utilization of subspecialty consultation services when attending and intern/resident assignment structures were varied on inpatient medical units; length of stay and hospital charges decreased when attendings were more involved in care. Replicating Wachter’s study, Kearns et al. (3) did find differences in patient length of stay when comparing attending and resident assignment structures, and suggested that greater faculty experience, involvement in the service, and use of protocols contributed to these differences. Hayward et al. (4) found no difference between attendings and interns/residents in the use of hospital resources as measured by relative value units, and hypothesized that this may be due in part to the team structure used on the wards, which provided a check and balance for standards of practice. Hayward cited the need for analogous patient populations in future service provider studies to better assess resource usage.
Research assessing patient satisfaction with clinicians-in-training has yielded mixed findings. Probst et al. (5) assessed factors influencing satisfaction (i.e., level of training, wait time, level of attention paid by the provider) and found no difference between patient satisfaction ratings for residents and attendings. However, O’Malley et al. (6) found that satisfaction was greater when patients were seen jointly by residents and attendings than when they were seen by attendings only. Yancy et al. (7) assessed patient satisfaction at academic and nonacademic sites and found significantly greater satisfaction with attendings than with residents in the academic setting; no significant differences were found at the nonacademic sites. They speculated that differences in experience and knowledge between clinicians, in patient populations between settings, and in clinic structure each might have influenced the satisfaction ratings.
Characteristics of Psychiatric Care
Quality of patient care is a broad construct with multiple definitions and measurement methodologies. Studies of the characteristics of care provided by attendings and clinicians-in-training in psychiatry have varied in which factors are investigated and which type of provider is examined. Wise et al. (8) assessed psychiatric residents and community physicians in an inpatient setting and found no significant differences in the medications ordered for patients or the diagnoses utilized. Other studies, however, have identified differences among clinicians with varying degrees of experience. Hansen et al. (9) found that psychiatric residents were less likely than attending researchers to accurately recognize and diagnose specific psychiatric symptoms such as tardive dyskinesia and drug-induced parkinsonism. Further, Meyerson et al. (10) assessed psychiatry residents and found that, compared to senior residents, junior residents admitted patients more often and prescribed outpatient medications for depression more often. These studies suggest that differences in a variety of patient care factors do exist between more and less experienced clinicians.
The current study was designed to expand what has been documented regarding clinicians-in-training and their impact on the delivery of care in academic settings by examining clinical management as one aspect of quality of care for children and adolescents in a psychiatric outpatient setting. The current study differs from previous research in two ways. First, studies to date have examined care only in the adult psychiatric population. This study addresses the provision of mental health services to pediatric populations, a complex process involving multiple consumers and sources of information and requiring a unique skill set for eliciting information that yields accurate diagnoses. Second, all of the patients assessed in this study came from the same clinic, in which they experienced the same clinical procedures and the same evaluation structure, thereby eliminating some potential confounding factors noted previously (7).
We expected that no significant differences would exist between the clinical management of patients assigned to attendings and that of patients assigned to clinicians-in-training. Specifically, we hypothesized no significant differences between attendings and clinicians-in-training in the amount of information obtained during initial evaluations, the number of postevaluation services recommended, and the types of postevaluation services recommended.
The Child and Adolescent Psychiatry Outpatient Clinic at Stanford University School of Medicine is an academic setting in the San Francisco Bay Area. Each year, mental health clinicians in the Outpatient Clinic evaluate and treat over 800 new patients who present with diverse diagnoses. The outpatient clinic includes four subspecialty clinics: Anxiety Disorders Clinic, Mood Disorders Clinic, Neuropsychiatry/Pervasive Developmental Disorders Clinic, and Attention Deficit Hyperactivity Disorder (ADHD)/Disruptive Behavior Disorders Clinic.
Attendings and Clinicians-in-Training
The attendings group in this study was made up of eight psychiatrists. All attendings had received advanced training in the treatment of children and adolescents and were board-certified. Attendings provided evaluations to 65.3% of the patient study sample (n=280). The clinicians-in-training group comprised child psychiatry fellows and general psychiatry residents who were completing a child rotation. There were 11 fellows; all were in their first or second year of advanced training for the treatment of children and adolescents and were assigned to the clinic for 1 year, either half- or full-time. They provided evaluations to 28.9% of the patient sample (n=124). Twelve general psychiatry residents were also included in the clinicians-in-training group; however, their impact on patient care was minimal, as they provided evaluations to only 5.8% (n=25) of the patient sample.
The patient sample consisted of 429 children and adolescents who were seen in the four subspecialty clinics over a 1.5-year period. The mean age of the sample was 10.6 years (SD=4.6). The sample consisted of 65.7% boys (n=282) and 33.1% girls (n=142). The ethnicity of the sample was as follows: 64.8% Caucasian (n=278), 9.3% Asian (n=40), 8.6% Hispanic (n=37), and 5.4% other ethnic groups (n=23). The sample’s primary diagnoses were made based on clinical interviews and fell into the following categories: 23.5% mood/depressive disorders (n=101), 19.3% anxiety disorders (n=83), 15.4% pervasive developmental disorders (n=66), 13.1% ADHD (n=56), 6.5% disruptive behavior disorders (n=28), 4.7% adjustment disorders (n=20), and 11.2% other diagnostic groups (n=48). In terms of emotional/behavioral symptom severity, the study sample fell within the clinically significant range for Child Behavior Checklist Total Problems, with a mean T score of 65.5 (SD=10.4). The mean Internalizing Problems T score was 63.3 (SD=11.1), corresponding to the borderline clinical range, and the mean Externalizing Problems T score was 60.1 (SD=12.6), corresponding to the nonclinical range.
The clinic director assigned all patients to service providers. Assignments were based primarily on service providers’ scheduled availability. If a patient requested an appointment with an attending, the request was considered but seldom granted, due to limited availability of initial evaluation slots in attendings’ schedules.
The study data are derived from a larger dataset of the Pediatric Mental Health Outcomes Initiative (PMHOI) in the Division of Child and Adolescent Psychiatry (11). The use of these data was consistent with human subject approval.
Ratings of subjects’ emotional and behavioral problems were assessed using the Child Behavior Checklist, 4–18 year-old version Parent Report (12) to provide a standardized description of the patient sample for each group. The Child Behavior Checklist is a 118-item measure that generates composite scores for Internalizing Problems, Externalizing Problems, and Total Problems. Possible T scores range from 50 to 100, with higher scores indicating greater symptomatology. Comprehensive reliability and validity evidence are available on this widely used measure (12).
The PMHOI Evaluation Record is a clinician-completed form developed for use in the Stanford Child and Adolescent Psychiatry clinic. This record included information on the amount and type of clinical data collected as part of the evaluation (e.g., verbal consultation with teacher, reports from previous provider, psychological testing, laboratory tests), the number and type of postevaluation services recommended, and the patient’s current DSM-IV diagnosis. All attendings and clinicians-in-training received a general orientation to the forms prior to their implementation in the clinic.
Evaluations in the clinic consisted of parent and child interviews and inspection of parent- and child-completed measures, which were collected before the initial appointment. Additional components could include record reviews and collateral contacts. Interviews occurred during a 2-hour appointment. Attendings and clinicians-in-training independently completed the PMHOI Evaluation Record immediately following the initial evaluation. Clinicians-in-training reviewed their cases with their supervising attending during regularly scheduled supervision, or sooner as warranted. Based on primary DSM-IV diagnosis, patients were placed into seven diagnostic groups: anxiety disorders, mood/depressive disorders, pervasive developmental disorders, attention deficit disorders, disruptive behavior disorders, adjustment disorders, and other disorders.
Approval to conduct the current study was received from the Stanford University Institutional Review Board for Medical Human Subjects.
Initial analyses were conducted to compare the patients who were assigned to attendings and clinicians-in-training groups. Independent samples t tests or Mann-Whitney tests were computed for patient age, family education level, family income, and Child Behavior Checklist scores. Chi-square analyses were computed for patient ethnicity and diagnosis. Hypotheses were tested using analysis of variance (ANOVA) to examine whether the amount of data collected during an evaluation and the number and type of postevaluation services differed depending on the type of service provider. The effect of patient age, patient gender, patient diagnosis, and all two-way interactions were controlled for in each ANOVA. Initial models included all simple effects and their two-way interactions. When interactions were nonsignificant, they were removed and only simple effects were reported. For all analyses, an alpha of 0.05 was used as the threshold of statistical significance.
Chi-square analyses indicated no significant differences in patient assignment between the attendings and clinicians-in-training groups for patient ethnicity. Mann-Whitney analyses indicated that patients assigned to the two groups also had comparable levels of family education and family income. Similarly, independent samples t tests indicated no significant differences in Child Behavior Checklist Total, Internalizing, or Externalizing Problems scores between patients assigned to attendings and those assigned to clinicians-in-training. However, patient diagnosis, age, and gender differed by group. Chi-square analysis indicated that male patients were more likely to be assigned to attendings (χ2=4.5, p<0.05). Mann-Whitney nonparametric analysis indicated that patients assigned to the attendings group were younger than those assigned to the clinicians-in-training group (attendings’ patient mean age=10.3 years; trainees’ patient mean age=11.1 years; Z=−2.5, p<0.01). Patients’ primary diagnosis also differed between attendings and clinicians-in-training (χ2=21.1, p<0.01). Attendings were more likely to be assigned patients with anxiety disorders, pervasive developmental disorders/developmental delay, disruptive behavior disorders, and mood/depressive disorders. There was no difference in assignment for ADHD, adjustment disorders, or other disorders. Given that patient assignment differed between the attendings and clinicians-in-training groups, the effects of patient age, gender, diagnostic group, and their two-way interactions were controlled for in all subsequent analyses.
Analysis of variance indicated that service providers in the attendings and clinicians-in-training groups collected the same amount of data (e.g., verbal consultation with teacher, reports from previous provider, psychological testing, laboratory tests) during patient evaluations. Similarly, analysis of variance indicated that there were no differences in the number of postevaluation services recommended by attendings and clinicians-in-training.
Service providers in the two groups also did not differ in the frequency with which they recommended further evaluation, psychotherapy, medication management, educational services, or other therapies (such as behavioral services provided in the home).
Overall, this study revealed no significant differences between the evaluations conducted by attendings and clinicians-in-training regarding the components of the evaluation and postevaluation services. Specifically, there were no differences between the groups for the amount of data collected or the number and type of recommendations made postevaluation.
The lack of differences between attendings and clinicians-in-training supports the hypothesis that evaluations conducted by clinicians-in-training at teaching institutions are analogous to those conducted by attendings. Several aspects of the clinical service examined in this study likely contributed to these findings and are discussed below.
First, supervision plays a key role in both the methodology of these types of studies and their findings. In this study, as in other studies of attendings and clinicians-in-training, there is interplay between the two groups: in a training model, cases seen by clinicians-in-training are supervised by attendings, who influence the management and outcome of each case, creating a potential confound in the data and subsequent findings. In the current study, however, clinicians-in-training submitted the PMHOI Evaluation Record form prior to review with an attending, thereby minimizing the impact of direct supervision on the findings. Nonetheless, regularly scheduled supervision likely enhanced the overall knowledge base of clinicians-in-training and positively influenced their ability to make decisions analogous to those made by attendings. A thorough assessment of the type of supervision provided and the information imparted is needed to fully examine the impact of supervision on the evaluations provided by clinicians-in-training.
Second, the use of the PMHOI Evaluation Record form may itself have positively influenced the evaluations delivered. By completing the form, attendings and clinicians-in-training reviewed a list of potential sources of clinical data and potential services for recommendation. This process allowed all clinicians, both attendings and clinicians-in-training, to systematically review options that might otherwise have been missed or forgotten. Further research is needed to substantiate this hypothesis; if supported, it would argue for the use of this type of measure in teaching settings to help ensure the delivery of comparable evaluations to patients.
Finally, the scope of this study was limited by the amount and type of information that could be collected from patients and service providers at the initial evaluation. Information obtained over the course of ongoing and follow-up treatment, the use of clinical protocols, assessment of the quality of data received from collateral sources such as consultations and reports, and feedback from the patients themselves would provide a clearer picture of the overall clinical service provided to patients by attendings and clinicians-in-training.
A hallmark of academic medical settings is the training of the next generation of service providers. Inherent in the model is the need for clinicians-in-training to provide services under the tutelage of supervisors. This study examined several factors to be considered in evaluating the overall impact of clinicians-in-training as service providers and areas in which the transfer of knowledge between attendings and clinicians-in-training is effective. The findings from this study support the premise that patients who are initially evaluated by clinicians-in-training at academic teaching institutions receive care comparable to that which is provided by attendings. Supervision as well as the integration of standardized measurement to monitor evaluations may help to ensure the provision of comparable evaluations from attendings and clinicians-in-training in an academic setting.