The articles in this special issue of Academic Psychiatry provide laudable attempts to bring psychiatric education into the twenty-first century. Since clinical practices, data and theories are swiftly changing (not always for the better), the authors who contributed to this issue confronted a Hydra-headed task.
Until recently, direct, fee-for-service, private medical practice predominated. The doctor could, in principle, regulate the time and effort addressed to each patient. Professional education consisted of learning the tools of the trade and staying current with the increased and transforming information. The articles in this issue emphasize the debate on postgraduate education and on mastering new practice-relevant evidence.
Realistic education must address the realities of conflicting educational interests as well as the limitations of current practice. Managed care and insurance requirements severely limit professionals’ ability to set their own timetable. This constricts the possibility of adequate diagnostic evaluation and, more crucially, the ability to closely monitor the patient’s progress (or regress). Because of insufficient knowledge, much practice remains at the trial and error level, and thus mistakes will occur. Good care requires the ability to detect mistakes, most of which are rectifiable, given the opportunity for early detection. However, the reimbursable "Med-Check" allows only a superficial grasp of fluctuating clinical reality. This amounts to de-professionalizing medical practice, since patients’ needs conflict with financial limitations on their care. Additionally, it is hard for students to invest in learning difficult techniques when they know that the circumstances defining actual practice prevent their use. Collectively, the articles in this issue do not address the problem of putting available knowledge to actual use.
Furthermore, the apparent flood of psychopharmacological and biological knowledge still does not provide an adequate basis for a knowledge-based psychiatric practice. Most psychiatric interventions still depend on conventional wisdom, flimsy as that may be.
Given the evidence-based medicine (EBM) emphasis on teaching tools for searching the Internet’s databases for relevant facts and the proliferation of algorithms and guidelines, my emphasis on the insufficiency of the knowledge base for psychiatric practice may appear overdrawn. However, it is important that we determine which questions our current knowledge base answers objectively.
Such questions should address whether a particular psychopharmacological agent has been determined to be safe in a sample of relatively healthy, well-defined, uncomplicated patients and whether it is statistically superior to placebo in a properly controlled, double-blind, randomized trial.
Questions that are partially addressed or completely unanswered include:
1) Should maintenance psychotropic medication doses remain at the normal treatment level, or should they be lowered?
2) What are the differential indications for choosing one treatment plan over another?
3) What is the usual drug regimen, given the normal treatment response?
4) Is there a substitute for titrating a dosage upward to the point of disturbing side effects (e.g., establishment of therapeutic blood level ranges appropriate to the patient’s age, sex, etc.)?
5) Should treatment fail, what should be the next course?
6) What is the necessary frequency of monitoring during maintenance?
7) How much time should be available for an adequate survey of the patient’s current status?
8) What are proven educational techniques for patients and their families?
The apparent factual body of knowledge from clinical trials also presents major problems:
First, the patient sample may or may not allow generalization to a particular patient, who is dogged by comorbidities.
Second, statistical significance is a long way from clinical significance. Given the sample sizes of pivotal studies, minuscule clinical effects may be anointed as statistically significant.
Third, and most troublesome, is the accumulating evidence for the allegiance effect. That is, those sponsors and investigators with a positive incentive for finding efficacy are most likely to do so. (This applies to psychotherapy as well as to pharmacotherapy.) Does that mean that these sponsors and investigators are doing it better, or that the innumerable opportunities for bias that can arise—even within well-designed studies—have not been adequately prevented? Independent replication by those without outcome investments, or—even better—collaborations between those with opposing investments, are clearly desirable, but not a reality.
This may all seem irrelevant to teachers. But is it not their job to do their best in keeping students up to par using whatever materials are available? In this special issue, the authors highlight procedures that may be useful in achieving this primary goal of education and training.
However, both the practitioner and our patients require a fuller understanding of the limits of our knowledge. Perhaps overoptimistically, I hope this commentary will help broaden the scope of academic and federally supported research to provide a proper foundation for improved education and practice.
A number of articles in this volume raise debatable issues for focused discussion. I will elaborate on these articles in the following section.
Ira Glick (4) was central to the development of both the American College of Neuropsychopharmacology (ACNP) and the American Society of Clinical Psychopharmacology (ASCP) psychopharmacological curricula. His recognition of the generally poor state of psychopharmacological teaching and practice motivated these efforts. The burgeoning popularity of psychopharmacological continuing medical education (CME) attests to the general belief that keeping current is a substantial burden. Although practitioners are steeped in journals, the time they actually spend reading them is negligible. Moreover, didactic CME programs are poor transmitters of information.
Glick and Sidney Zisook (4) present two objectives. The short-term goal is improving the teaching/learning process, a central concern of all the articles in this edition. However, Glick and Zisook emphasize that the most important long-term objective is improving clinical practice.
Glick and Zisook refer to the dearth of neuroscience teaching. Is neuroscience sufficiently advanced to substantially impact psychiatric practice? The fact that psychotropic agents affect neurotransmitters is interesting and gives a scientific "glow" to our explanations to patients but does not explain practice decisions that rest on neuroscience as a body of knowledge. Neuroimaging contributes to neurological diagnosis, although normative standards are still being developed. The major current impact of psychiatric neuroimaging may well be political: it provides objective evidence to legislators suggesting that psychopathological states are not simply deviant behavior, functional abnormalities, or weaknesses of will.
Glick’s survey of utilization approaches the overriding question of detecting a therapeutic signal: does this educational innovation actually seem to make matters better? However, the articles in this issue do not call for controlled experimentation in teaching or for outcomes research that addresses actual clinical practice, a neglect that is the common plight of all educational programs.
The profound lack of experimental validation of educational and social programs is slowly being recognized. The Cochrane collaboration emphasizes objective summaries of evidence-based medicine. The less known Campbell collaboration attempts to bring together literature on social and educational experimentation. A brief perusal of their files indicates that the merging of these two areas has made way for a new field (1).
Didactic teaching is blamed for high failure rates in both oral and written specialty board examinations. This failure rate is particularly unnerving since board examinations are, at best, weak surrogates for proper practice.
Zisook et al. (5) argue that there is information overload, but the amount of fact relevant to the complexities of clinical practice is dismayingly small. Regarding genomics, neurotransmitters, receptors, and imaging, there are remarkable, swiftly increasing bodies of data, but their proper place in a clinical curriculum is arguable. The authors recommend an attitude of continued learning, combining the skills to seek, analyze, and utilize information effectively. However, they do not address whether this is possible once in practice.
Zisook et al. recognize that even if something is theoretically sound and rationally appealing, given current theory, that does not mean it is useful. The same criteria that permit declaring a drug safe and effective are applicable to educational methods, although they require a different level of organization.
The goal of applying critical and statistical expertise to the therapeutic literature also applies to the pedagogical literature.
Discussing who should be teaching psychopharmacology, Steven Dubovsky (6) remarks, "The most important attributes in the effective psychopharmacology educator are knowledge, enthusiasm, honesty, an ability to encourage critical thinking, and genuine interest in the student." Is there any subject that would not benefit from such a teacher? That quality assurance in psychopharmacology education should be "based on meaningful and detailed peer review" seems reasonable but ignores the entire literature indicating the uncertainties of peer review. Dubovsky also maintains that resident psychiatrists can master more advanced topics such as the mechanism of medication action and how to evaluate clinical trials critically.
However, the pragmatics of prescription writing cannot depend on understanding the mechanisms of medication action since these mechanisms remain obscure.
Critically evaluating clinical trials is extremely desirable, but my attempt to teach in this area was not very successful. This, of course, may have been my fault, but a number of colleagues have reported similar disappointments. Our current hypothesis is that we are bucking the tide regarding students’ concerns. Most students wish to be taught useful procedures for their practice. The possibility that they may be forced to evaluate new, strange procedures in the future is a turnoff. They look about them and see many professionals, who do not show any evident attempt to keep current, practicing successfully. This discourages the student from learning complex critical tools.
James Jefferson (7), quite properly, argues that many useful medications have fallen off the radar screen simply because they are off patent and not advertised or detailed. I agree with him almost entirely, except when he states that, "There are so many better tolerated, safer alternatives available. A few patients will need an [monoamine oxidase inhibitor (MAOI)] and few clinicians will have occasion to prescribe one." Current data indicates that MAOIs are highly specific for patients with early-onset or chronic atypical depression and that other medications do not do the job. The proportion of atypical depressives in an outpatient depressive population is approximately 30%. Therefore, a fair number of "refractory depressives" are simply mistreated depressives. This section could, however, have amplified the discussion of the reversible MAO inhibitors, whose utility should not be confined to a few specialists.
Jeff Huffman and Anna Georgiopoulos (8) provide a useful outline of their, apparently, quite superior training. What comes through clearly is the desire for frequent, knowledgeable, supportive, hands-on, care-oriented supervision. It is not so clear whether they recognize that, at times, such "supervision" obscures the lack of factual knowledge, and trainees should also be able to distinguish useful facts from authoritative opinion based on "clinical experience."
Carl Salzman (9) directly challenges the usefulness of algorithms and guidelines by citing their questionable reliability and validity. Furthermore, he suggests that guidelines may impair decision making by neglecting prescribing factors important to psychiatry. However, data demonstrating the actual utility of these factors is slim to nonexistent. This does not contradict Salzman, however; it simply reasserts our ignorance.
Salzman argues that guidelines commonly lead to patients taking large doses of several different classes of psychotropic drugs since some symptoms are very difficult to treat. "With more medication, patients experience more side effects but only modest therapeutic benefit." This may be true, but the use of several, possibly supplementary, medications warrants investigation.
Salzman adds that, "Preconceived recommendations do not permit the development of clinical skills that can appraise the medication needs of any individual patient, which is a learning task of a trainee." However, does the trainee have an evidential base to rely on when addressing this task?
In his argument, Salzman is apprehensive that treatment algorithms and guidelines will become the whole of psychiatric treatment. More consistent practice patterns may come at the expense of reducing individualized care. As usual, there are no available data. Unfortunately, much of what passed for individualized care in the days of psychotherapeutic dominance was neither individualized nor was it care, although it was often touted as clinical wisdom. The attribution of clinical wisdom is frequently eminence based.
Salzman accurately states that good psychopharmacology requires detective skills but does not address whether the detective has enough time to gather appropriate data. Relying on expert opinion-based algorithms and treatment guidelines is relying on a weak reed. However, relying on clinical experience for developing clinical wisdom may also be a weak reed. Perhaps Salzman should not worry too much about the promulgation of guidelines since they are largely ignored in practice (2).
David Osser et al. (10) describe a 3-year psychopharmacology course consisting of thirty-two 2- to 3-hour case conferences and reviews of related articles. They agree that research comparing the attitudes and practice outcomes of their graduates with recipients of other training methods is needed.
They also maintain that a greater barrier to accepting such training may be physicians’ resistance to the discipline required, although there is limited evidence to support many clinical decisions.
However, these are not unrelated problems. If there were firm, useful evidence for many aspects of practice, motivation for a time consuming arduous discipline would be enhanced. But why knock yourself out if the payoff is likely small?
The authors do not agree with the concept of EBM, rejecting it as cookbook medicine, even though "curiously, [they] are likely to agree with the specific recommendations in the guidelines." This does not seem curious but rather reasonable. Agreeing with EBM amounts to accepting being held to a standard. If this standard inadequately informs practice realities (as well as requiring a lot of work and time), why incur the risks of a burdensome oversight possibility? The vagueness of the "usual and customary practice" criterion, amplified by the "respect for a minority" clause, effectively removes professional oversight from anything except gross negligence and malpractice suits.
Treatment sequencing seems the most obvious area for guidelines to help. The conscientious practitioner has a pretty good idea, once a diagnosis is made, as to likely effective treatments. So algorithmic consultation is unlikely at that point.
The problem starts when the initial treatment does not do the job. For instance, in the Osser et al. algorithm for depression, at various decision points the reference is to "experts think," indicating an inadequate factual reference base. However, finding factual support for such decisions often forces reliance on thin material.
For instance, whether the patient has an atypical depression does not arise until after the patient has failed a trial of a tricyclic, at which point an MAOI is recommended. Bupropion is proposed as an alternative if an MAOI trial is judged too arduous, but this reference is not available on MEDLINE and is not a placebo-controlled trial. There is no reference to reversible MAO inhibitors.
References may create an aura of pseudoexactness. Our field requires expert opinion for guidance, but references to a largely nonexistent database should be avoided.
Osser et al. argue that "random assignment of trainees to different training approaches would certainly be impractical"; but the random assignment of matched sites is recommended educational evaluative practice. Further, formal tests for the ability to assess the validity of a paper are possible.
James Ellison (11) clearly states that the pros and cons of "collaborative therapy" are still highly controversial and fact poor. Nonetheless, he also affirms "that collaborative treatment is an accepted and effective practice." "Accepted" seems accurate, but "effective" remains to be shown. Much of his discussion is quite sensible, and it seems clear that his major thrust is toward advocacy rather than evaluation.
Peter Weiden and Rao (12) delineate a sensible theory illuminating the major problem of noncompliance and suggest adding a countervailing curriculum, arguing that it is well worth it. This is one of the most testable innovations in psychiatric education. A randomized study, checked by appropriate urine or blood tests, comparing their thoughtful curriculum with ordinary education across matched sites, would provide a trenchant answer.
Amy Brodkey (13) and Paul Mohl (14) present similar views. They correctly state that the pharmaceutical industry extensively teaches psychopharmacology to trainees and, further, fosters good will and possibly differential prescription practices via lectures, lunches, gifts, etc. On less firm ground, they state that industry promotion causes demonstrable harm to trainees, the public, and the profession.
Mohl refers to "the large amount of evidence that drug company interventions influence physician practice at all levels (1-7)." However, the cited references actually provide little support for this sweeping conclusion.
These authors affirm that the medical profession must assert control of medical education by drawing a firm barrier between commercial and professional pursuits. One is always concerned that he who pays the piper calls the tune. Those directly paid by industry may well avoid biting the hand that feeds them, so more transparency about speaker and author income might be useful. However, raising the audience’s level of envy and suspicion will not illuminate the specific claims that need correction or contradiction.
Truth telling would be better served if lectures and journals regularly presented contrasting views about controversial topics, thus fostering concurrent debate. Hopefully informative, this would certainly be less boring than the usual ineffectual didactics.
The enormous expenditures on marketing directly to physicians are taken to imply that this effectively causes the physician to differentially prescribe, according to the pitch given him. Perhaps so, since medications that have lost patent protection, such as the tricyclics, lithium and MAOIs, go largely unadvertised and unprescribed. But, within currently popular drugs, is there a differential prescribing effect or is marketing trapped by an escalating, competitive arms race?
What pharmaceutical house could admit that their efforts have no substantial effect upon physician prescribing, since physicians were trained by years of TV to disbelieve advertising claims? The marketing manager says, "We are in the Red Queen dance, running as fast as we can just to stay in the same place. If we stop running we will be forgotten. We must say, ‘Here I am; do not forget me.’" That medical marketing is persuasive with regard to differential prescribing between competitors has not, in my view, been demonstrated.
Differential safety is another matter. The prescriptive superiority of selective serotonin reuptake inhibitors (SSRIs) over tricyclic antidepressants (TCAs) and of valproate over lithium, as well as the neglect of MAOIs, appears largely driven by doctors’ safety concerns.
That postgraduate education will remain a commercial enterprise is likely unless countervailing pressure develops. The academic teachers of psychopharmacology might accept an obligation for educating the patient support groups who can, in principle, provide this pressure.
The idea of using randomized trials across matched sites for the evaluation of educational practices should meet with a sympathetic audience in academic psychopharmacologists, although it might be thought too novel to bear fruit. Fortunately, the remarkable book "Evidence Matters: Randomized Trials in Education Research," edited by Frederick Mosteller and Robert Boruch, provides provocative histories and suggestions (3). They aptly cite Walter Lippmann, who warned that the future is imperiled "by leaving great questions to be fought out between ignorant change on the one hand and ignorant opposition to change on the other."