Proposals made under the umbrella of health care reform to reduce and limit the number of nonprimary care residents, including psychiatry house staff, have generated considerable concern. Although large-scale, top-down, federal health care reform is now stalled, it may be resurrected in the future. Mechanisms initially suggested for decision making with respect to mandating the size and number of future residency programs, currently on hold, included using the Residency Review Committees (RRCs) or a federal commission to make these judgments based on the quality of training. The Veterans Administration (VA), a major influence in postgraduate training, is already moving ahead on a broad national front to reduce the number of specialists relative to generalists trained in its facilities. These actions will inevitably affect all house staff training in academic medical centers affiliated with the VA. In response to the possibility that more comprehensive federal regulation might be enacted in the future, the Executive Council of the American Association of Directors of Psychiatric Residency Training (AADPRT) appointed a task force to examine issues related to assessing the quality of residency programs and, if necessary, to offer policymakers informed discussion and recommendations.
At the January 1994 meeting of the AADPRT, the task force organized into a steering committee and five work groups: 1) Framing the Debate: to describe background issues and the context in which our recommendations should be understood; 2) Estimating the Need for Psychiatrists: to independently examine available data and analysis on how many psychiatrists might be necessary for the American workforce; 3) Defining Quality in Psychiatric Residency Programs: to suggest definitions and possible dimensions along which quality might be examined; 4) Methods Used to Assess the Quality of Psychiatric Training: to review previously used assessment tools; and 5) Options: to examine alternatives by which judgments on the quality of psychiatric training programs and decisions about regulating the size and number of programs might be made.
Each work group prepared a written report that contributed to this summary document. Here we briefly summarize the work groups' reports and synthesize the work of this task force. Some of the more detailed work group reports may be prepared for separate publication.
Health care policymakers in the Clinton administration started out with several key assumptions directly pertinent to the issues under consideration: 1) the United States has too many medical specialists. In spite of conclusions reached by COGME (the Council on Graduate Medical Education) that psychiatrists, especially child psychiatrists, were in undersupply, the policymakers were silent on whether psychiatry was among those specialties with an oversupply (1); 2) the best way to reduce the number of specialists is to regulate and control the size and number of residency programs in each specialty; and 3) the best way to determine which residency programs should continue to train residents, and how many each one should train, would be via a central national mechanism based largely on the quality of training programs. Such central planning for the training of physicians exists in various forms in Canada (by province) and in some European countries. Early Clinton health care reformers envisioned making determinations about program size and quality either via the existing RRCs, which operate under the aegis of the Accreditation Council for Graduate Medical Education (ACGME), or via a federal commission or agency. In either instance, the ultimate regulating body was expected to make its determinations based on the quality of training. The rationale for suggesting that the RRCs should fill this role resided in the fact that for many years the RRCs have evaluated the quality of residency programs as the basis for providing both accreditation and detailed feedback to institutions. Although the RRCs declined to serve in any regulatory role on allocation of residency positions, the ACGME staff did analyze what accepting such responsibilities might entail (Accreditation Council for Graduate Medical Education: The role of the ACGME in relationship to national physician workforce planning: an ACGME discussion draft [in-house document], 1993).
Fully aware of numerous methodological and legal problems that finally led to their refusal to participate, ACGME staff sketched out scenarios in which the RRCs might either conduct more detailed and frequent program reviews or use their existing sources of information to render judgments on the quality of residency programs. These judgments on quality might be based, for example, on the type and number of deficiencies noted during RRC site visits.
To what extent can we accept the three aforementioned assumptions as valid? First, considerable debate exists as to how many psychiatrists the nation needs; workforce issues are considered in the following section. Furthermore, regulating the number of psychiatrists without concurrently regulating the skyrocketing numbers of nonpsychiatrist mental health professionals in training—psychologists, psychiatric social workers, and a variety of Master's-level therapists—seems misguided.
The second assumption states that the best way to regulate the number and location of trainees is through nationally regulated top-down control. This assumption is also questionable, since many other reasonable regulatory methods, considered in the section on options, are easily conceived.
The third assumption, that a single central arbiter of what constitutes quality training is truly the best way to authorize the ongoing existence and size of programs, is also problematic. Much of what follows concerns the difficulties encountered in attempts to define and determine quality.
Moreover, beyond quality per se, many other political factors will, and should, inevitably enter decision-making processes to determine the size and distribution of psychiatric residency programs. These include the opinions of local influence makers; the service needs of public institutions; the pride, status, and reputations of institutions currently hosting training programs; and many other factors that are not exactly coincident with quality.
To summarize, each key assumption underlying the push to reduce and allocate the number of residents based on how a single central authority determined the "quality" of training programs seemed open to serious question. Therefore, the task force's decision to envision possible mechanisms for assessing program quality and for making allocations was not to be construed as endorsing these assumptions. However, should federal planners decide to regulate house staff numbers and locations at some point in the future, we hope that our discussions will provide input from the field and that our recommendations might be preferable to harmful alternatives.
Significant psychiatric disorders, mostly major depression, anxiety disorders, substance abuse and dependence, cognitive impairment, and psychoses, have a high point prevalence, affecting roughly 15.4% of the population (2,3). The fact that only about 10% of these patients have seen a "specialty" mental health provider (4) supports the view that the country needs a large psychiatric workforce. Additionally, a good case can be made that psychiatrists rather than other providers are needed to treat this population, based on several facts: 1) nonphysician mental health providers lack the training even to recognize the possible presence of medical factors, and 2) nonpsychiatric physicians on medical units miss a large fraction of cases with clear-cut delirium (5,6), and more often than not fail to diagnose and treat these disorders (7,8).
Nevertheless, these compelling reasons do not usually convince nonpsychiatrist health planners and policymakers that we need more psychiatrists. They have other models in mind. The "workforce" debate in psychiatry, as to whether we are training too many psychiatrists or need more, extends back for more than a decade (9). A decade ago, it was already clear that the proliferation of managed care models of professional staffing and of nonphysician mental health providers might reduce the "demand" for psychiatrists (i.e., what the country would pay for), even though the "needs" might be great. Based on the prevalence of psychiatric disorders and the task force's view of the proper role of psychiatrists in treating these disorders, we can and do support the view of COGME that the nation needs more psychiatrists than are now available (1). However, we are also aware that an alternative argument may be made, for example, based on traditional planning figures of 10 psychiatrists per 100,000 population, that the current number of 12—16 psychiatrists per 100,000 population is already too large. Furthermore, workforce models developed by some health maintenance organizations (HMOs) and managed care companies suggest that their systems will require only 3 to 5 psychiatrists per 100,000 population, in which case we already have far too many psychiatrists (10—12). To illustrate, an influential "population-based benchmark" analysis, offered as an alternative to needs-based or demand-based planning for all specialties, suggested that the current number of psychiatrists, about 13.9/100,000, far exceeds the numbers with which HMOs are currently managing (about 4.19/100,000). Furthermore, the current availability of psychiatrists far exceeds those needed, according to the numbers with which the populations of two benchmark cities, Wichita, Kansas, and Minneapolis, Minnesota, are managing (1.93/100,000 and 1.41/100,000, respectively) (13). An analysis by Dr.
James Scully correctly pointed out that determining the number of psychiatrists needed requires a set of initial assumptions as to what conditions are to be treated by whom, and which health care workers should be tasked with which aspects of psychiatric care (14,15). These assumptions vary widely among various model builders. Finally, a recent analysis by Faulkner and Goldman (16) offered a five-step method for estimating the need for psychiatric manpower. These authors concluded that, depending on the assumptions used in each step, the total psychiatric manpower requirements for the U.S. population range from 2,989 to 358,696 full-time-equivalent psychiatrists.
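The sensitivity of such estimates to the planners' starting assumptions can be illustrated with simple arithmetic. The sketch below is purely illustrative: the per-100,000 ratios are those cited above, but the round U.S. population figure is our own assumption for the sake of the example.

```python
# Illustrative calculation: how the assumed psychiatrist-to-population
# ratio drives national workforce estimates. The ratios come from the
# figures cited in the text; the population figure (260 million, a
# round mid-1990s approximation) is an assumption for illustration only.

US_POPULATION = 260_000_000  # assumed, for illustration

# Assumed psychiatrists needed per 100,000 population under each model
ratios_per_100k = {
    "traditional planning figure": 10.0,
    "current supply (midpoint)": 13.9,
    "HMO staffing models (low)": 3.0,
    "HMO staffing models (high)": 5.0,
    "benchmark city (Wichita)": 1.93,
}

for model, ratio in ratios_per_100k.items():
    total = ratio * US_POPULATION / 100_000
    print(f"{model}: {total:,.0f} psychiatrists")
```

Even this crude exercise reproduces the order-of-magnitude disagreements described above: identical populations yield national "requirements" differing by a factor of five or more, purely as a function of the chosen ratio.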
Furthermore, simply looking at the number of psychiatrists available to the population of large regions or cities does not address their geographic or functional maldistribution. While policy experts believe that the marketplace may adequately regulate the distribution of specialist vs. primary care effort, these marketplace factors have not been sufficient to correct maldistribution (17,18). State-by-state planning to contend with these problems may be inadequate, and some authorities believe that only a "top-down" federal approach is likely to impact maldistribution (19).
Geographic maldistribution particularly affects rural and inner-city areas. Concerns about functional maldistribution have also been focused on the proper distribution of psychiatrists' efforts devoted to severe, debilitating mental illness vs. other types of psychiatric disorders. Studies in the United States and elsewhere have identified several important issues governing psychiatrists' practice locations (e.g., rural or inner-city vs. other locations) and practice patterns (e.g., private practice vs. public-sector practice) (20—22). Like other physicians, psychiatrists are more likely to settle where professional, social, and economic incentives exist for desirable types of practices and life-styles. Psychiatrists are more likely to settle where the population has adequate insurance coverage for psychiatric disorders via private insurance or federal insurance programs, has a higher degree of discretionary income, and has some degree of sophistication about the value of using psychiatric services. Their distribution resembles that of attorneys (23). International medical graduates (IMGs) are more likely to take positions in the public sector and to remain in those positions rather than shift to private practices (24). In other countries, useful strategies that have attracted psychiatrists to work in poorly served areas have included improved terms and conditions of employment and facilities, creation of a critical mass of psychiatrists in major nonmetropolitan areas, appointment of psychiatric leaders, increasing training to assure competence, community support, and academic appointments, among others (21,22). Some of these strategies have already helped in various areas of the United States as well. Regardless of how psychiatrists are ultimately encouraged to work in underserved areas, the point is that no fully comprehensive workforce policy can ignore grappling with maldistribution issues.
The debates are far from over. The American Psychiatric Association (APA) is considering the possibility of sponsoring a major workforce conference to consider the issue in depth (25), and the Group for the Advancement of Psychiatry also plans to take this issue under consideration. In short, depending on the model one adopts, we may currently have too few or too many psychiatrists, and we should therefore be training either more or fewer psychiatric residents.
Defining quality in residency programs seems a daunting task, involving a notion that appears to be nebulous, abstract, and subjective. Fortunately, new tools, originally developed for industrial purposes, have recently been applied to defining quality of health care (26—28).
Two key concepts are needed to understand quality in a practical as opposed to a philosophical context. The first key concept is that the quality of a particular product depends on meeting or exceeding the needs and reasonable expectations of the persons receiving or using the product. These people are the product's customers. Since most products have many possible users, each product may consequently be associated with many needs and expectations. This leads to the second key concept, that quality is not unidimensional for most products. Rather, quality is multidimensional, with each dimension representing some aspect of the product related to a particular customer's needs or expectations (29,30).
In residency programs, quality should be considered multidimensionally, in relation to those aspects most important to the training programs' customers. The weight or value attached to various dimensions may vary according to particular needs. Given the many different types of persons impacted by training programs, several distinct groups of "customers" may be considered as "most important": trainees (including applicants to the program as well as current residents), patients and family members receiving care from residents and program graduates, faculty who rely on residents for clinical services and as potential future faculty, third parties who pay for clinical services and to support training, and professional associations and regulatory agencies concerned with the profession's quality. Clearly, additional groups of customers exist as well, but the problem of defining dimensions of quality quickly becomes intractable as the list of customers grows.
Considering the varying needs and expectations of the customers just listed, quality can be rated along several apparent dimensions, including the extent to which 1) salient clinical knowledge, skills, and attitudes are acquired by trainees (in the aggregate, these elements constitute those dimensions usually considered to represent "quality" training); 2) programs meet important societal needs, for example, trainees serve underserved populations; 3) quality trainees are produced efficiently and effectively (economically); 4) graduates practice ethically, participate in professional organizations, and advance the field of psychiatry through research, education, and public service; and 5) trainees are selected and treated fairly, ethically, and without discrimination. Many other dimensions can be imagined, and the reader is invited to do so.
Operationalizing such dimensions requires selecting reliable and valid measures of the relevant dimensions. Some measures now exist, but most will require development. Converting data obtained on such measures into information usually requires that subjective values be placed on certain outcomes and that the relative importance of measures be weighed against one another. For example, such an evaluation might assess the trade-offs involved in the fact that some programs preferentially produce public-sector psychiatrists, whereas others produce greater numbers of researchers. Costs for producing public-service psychiatrists in one program might be compared with the costs of producing them in another program. However, these costs should not be simplistically compared with the costs of preparing research psychiatrists, since the resources and processes involved differ substantially.
How are these competing values and agendas to be balanced? It is clear that any modeling system will have to include a large number of arbitrary trade-offs, all necessarily motivated by personal value preferences, economic incentives, local and regional political pressures, and other non-absolutes. Even if measures to reliably and validly assess the most salient dimensions are available, and even if arbitrary relative values are set for each of the characteristics to be rated, decisions as to which programs to keep, shrink, and cut will necessarily be subjective. As in other such decision-making processes, such as the distribution of grants, the final determinations will still have to rely on some intuitive use of "fuzzy logic" (31)—fuzzy approximations based on vectoring a multitude of crude estimates. Final decisions as to how these competing issues play out depend in large measure on who is at the table; that is, they depend on a political process.
Existing methods for assessing the quality of residency programs address only some of the customer-relevant dimensions mentioned earlier. Checklists provided to RRC site visitors cover many dimensions on which programs may be reasonably assessed, including institutional and organizational resources, faculty qualifications and responsibilities, details on clinical and didactic research and other aspects of the course of study, responsibilities and supervision of the residents, duty hours, how residents and faculty are evaluated, and other programmatic aspects. However, many important dimensions are not included in these assessment checklists, and even many of those characteristics that are assessed are not necessarily measured with satisfactory degrees of validity, reliability, or precision. Among other faults and limitations, quick-and-dirty methods relying primarily on RRC assessment checklists suffer from the following flaws: in spite of efforts to improve the assessment method with standardized checklists and orientation programs for site visitors, perceptions in the field, whether accurate or not, suggest that a great deal of arbitrary variation still exists among assessors. Some are specialist site visitors and others are not; some are more lenient and understanding than others in their appraisal of what programs are trying and able to accomplish; and some are rigid sticklers for administrative record-keeping requirements, while others are more tolerant. We are unaware of any studies of interrater reliability with regard to RRC site visits. Nor are we aware of any published criteria by which the RRC determines how the number and nature of problems noted on its site visits are ultimately evaluated, scored, and summated to result in full accreditation, limited accreditation with early review, probation, or, as rarely happens, loss of accreditation.
Clearly, some deficiencies or their combination are far more important and grievous than others to the RRC, but this has not been made explicit to the field.
The methods for measuring quality used by the American Board of Psychiatry and Neurology (ABPN) and by the Psychiatric Residency In-Service Training Examination (PRITE) are far more limited in scope than those of the RRC. Although their cognitive examinations are intended to test a candidate's knowledge in various important areas of psychiatry, and although the examinations appear to have at least some face validity in this regard, the extent to which these examinations measure knowledge that actually means something in day-to-day clinical practice is unknown. Most authorities would agree, on the basis of common sense, that it is better for psychiatrists to demonstrate that they have mastered the material tested on these examinations than that they lack such knowledge. With regard to the measurement of clinical proficiencies, most would agree that the live patient and videotape portions of the ABPN examinations are at best extremely crude in their capacity to assess many important aspects of psychiatric practice, including interpersonal skills, clinical judgment, therapeutic process, and ethical behavior.
During the task force's initial meetings, it became quickly evident that many AADPRT members greatly distrusted any plan in which a single central agency might be authorized to make all decisions about the number and allocation of residency positions. Although the need to reduce the number of house staff nationally is clearly not settled, we realized that—whether or not all agree—we had to consider the possibilities that fewer residents are needed and that fewer psychiatric residents may at some point be authorized for the nation as a whole. Accordingly, the Options Work Group reviewed existing models for house staff distribution and identified several possible ways other than central top-down fiat, whereby planning for residency program size and location might occur.
1. Models From England, Canada, and France
In England, the National Health Service determines the number of specialists and has historically used a funding mechanism tied directly to hospitals, using the registrar or house physician model, bypassing the universities entirely. Currently, central and regional councils for postgraduate medical education exist that recommend numbers for consultant positions and quotas for trainees by specialty, a heavily top-down approach (32,33).
In Canada, universities administer and regulate postgraduate education, and most postgraduate training takes place within university-affiliated teaching hospitals. However, provinces, collaborating with universities, provide the bulk of trainee stipends directly to teaching hospitals, thereby retaining some control over the distribution of specialists and their locations (34,35).
Since 1982, France has had a national board that determines the numbers of specialist training slots and their locations. When making decisions, this board considers the advice of regional, technical, academic, and national commissions, as well as economic and distribution factors. At first this system generated considerable discord, including a house staff strike, but the situation has since stabilized considerably (36).
2. Voluntary Reductions in Size
Before heavy-handed top-down regulation is imposed, it seems reasonable to assess the extent to which voluntary reductions can achieve the numbers that health reformers believe are indicated. In fact, many residency programs have already "voluntarily" downsized in response to economic pressures and because the number of American medical school graduates entering psychiatric training has also dropped dramatically in the past few years (37).
Natural downsizing of psychiatric training programs is likely to occur as remnants of the health care reform process continue to push the so-called 110%/50% solution, which would limit the number of residency positions to 110% of the number of graduating American medical students and require that 50% of residents train in primary care. This proposal has recently been sanctioned by a large number of influential medical education associations, spearheaded by the Association of American Medical Colleges (38). Since large numbers of postgraduate year-1 positions in psychiatry are currently and consistently filled by IMGs (39), limiting the extent to which IMGs could fill open positions would immediately drop the number of psychiatric residents sharply. Recently, several states, including California and Arizona, have greatly reduced the ability of IMGs to obtain residency positions within their borders.
3. Merging and Combining Programs
The number and size of programs may be reduced by merging several programs in circumscribed geographic regions, combining their faculties and resources, and selecting the sites and experiences that provide the best training for their reduced number of house staff. Care must be taken to minimize fragmentation of training experiences caused by geographically distant sites or lack of coherence in training philosophies. Such merging has already occurred voluntarily in some locales because of economic pressures and with encouragement from the RRCs for weak programs to affiliate with nearby university programs (37). If necessary, future mergers could be further encouraged or even mandated by regional or national authorities. Planning would require local faculties to negotiate with one another and might require the appointment of local arbiters to make ultimate decisions if negotiations reach an impasse.
4. Across-the-Board or Chance (Lottery) Reductions
If, for political reasons, "fairness" should come to play a larger role than quality in determining how to distribute a reduced number of trainees, across-the-board reductions might be instituted. Such reductions might be either absolute (e.g., each program required to reduce by the same number, with a lower limit set at the critical mass required for accreditation) or proportional, with larger programs losing more residents than smaller ones. If not every program needed to downsize to meet a national target, a lottery might be established. Such across-the-board cuts leave aside all pretensions of being based on quality.
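The two reduction schemes just described can be sketched in a few lines. Everything here is hypothetical: the program sizes are invented, and the floor of three residents per year merely echoes the critical-mass figure discussed later in this report.

```python
# Sketch of the two across-the-board reduction schemes: an absolute cut
# (the same number from every program) and a proportional cut (larger
# programs lose more), both respecting a floor at an assumed critical
# mass required for accreditation. All numbers are illustrative.

CRITICAL_MASS = 3  # assumed minimum residents per year for accreditation

def absolute_cut(sizes, n):
    """Cut n positions from each program, but never below critical mass."""
    return [max(CRITICAL_MASS, s - n) for s in sizes]

def proportional_cut(sizes, fraction):
    """Cut the same fraction from each program, but never below critical mass."""
    return [max(CRITICAL_MASS, round(s * (1 - fraction))) for s in sizes]

programs = [12, 8, 6, 4]                 # hypothetical residents per year
print(absolute_cut(programs, 2))         # large and small programs lose equally
print(proportional_cut(programs, 0.25))  # larger programs lose more positions
```

Note how the floor interacts with "fairness": under either scheme, programs already at or near critical mass escape further cuts, shifting the burden onto mid-sized programs, which is one more arbitrary trade-off of the kind discussed earlier.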
5. Institutionally Decided "Rightsizing" of Training Programs
This option, already occurring at many centers, proposes that program size would be decided at the level of the local institutions (e.g., medical schools and medical centers) (37). In this scenario, institutions would qualify for federal house staff funding only if they met certain conditions, for example, training the right proportion of primary care vs. specialty house staff. Deciding how many psychiatric house staff vis-à-vis other specialty house staff to train in a given institution would be locally decided based on a variety of competing values and concerns. Review mechanisms would be necessary to assure that institutions did not continue to train larger numbers of specialty residents with nonfederal funds.
6. Geographically/Regionally Balanced Reductions
In this scenario, some central authority would determine the number of residents to be trained, and then distribute funding and/or numbers on some equitable basis to states or regions, for example, on a straight number-per-population ratio basis. The assigned funding and/or numbers could be tied to specific specialty type, or decisions about distribution among specialties could be made in each region according to local needs and politics. The states or regions might then decide where the training should take place. The VA has recently charged its several national regions to use this mechanism to reduce specialty training relative to primary care training. Various specialties have been given different weights in the formulas to be used. Primary care, Category I, is most highly protected. Geriatrics and psychiatry fall into Category II. Many surgical subspecialties are in higher categories.
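The straight number-per-population mechanism described above amounts to simple proportional allocation. The following sketch is hypothetical throughout (the region names, populations, and national total are invented) and uses a largest-remainder rule, one common convention, to dispose of rounding leftovers:

```python
# Sketch of a geographically balanced distribution: a national total of
# residency positions allocated to regions on a straight per-population
# basis. Region names, populations, and the total are invented.

def allocate(total_positions, populations):
    """Allocate positions proportionally to population (largest-remainder rule)."""
    total_pop = sum(populations.values())
    quotas = {r: total_positions * p / total_pop for r, p in populations.items()}
    alloc = {r: int(q) for r, q in quotas.items()}  # floor of each quota
    leftover = total_positions - sum(alloc.values())
    # hand remaining positions to the regions with the largest fractional parts
    for r in sorted(quotas, key=lambda r: quotas[r] - int(quotas[r]),
                    reverse=True)[:leftover]:
        alloc[r] += 1
    return alloc

regions = {"Northeast": 55_000_000, "South": 90_000_000,
           "Midwest": 65_000_000, "West": 50_000_000}
print(allocate(1000, regions))
```

A real formula of the VA type would additionally weight each specialty by category before allocating, but the per-population core would look much like this.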
If federal funds are distributed specifically for psychiatric training and tied to geographical areas, for example, underserved communities, the history of the Community Mental Health Movement suggests that some regions may elect not to receive these dollars. Another possible problem is a mismatch between resources and geographic attractiveness. In Great Britain, for example, when training dollars were tied to positions in undesirable locations, some positions remained empty, even though several hundred physicians were unemployed at the time (33).
If geographic/regional distribution mechanisms are used, we strongly advocate for specific numbers of psychiatric house staff and attached dollars, since we can anticipate that psychiatry will often fare poorly in locally fought battles over the distribution of stipends among nonprimary care specialties. Psychiatry will fare particularly poorly in these discussions if the argument made by the other specialties focuses on the economic advantages of having more residents in procedure-oriented fields. The VA's recent positive weighting of psychiatry has been encouraging in this regard.
7. Ranking Programs by Explicit Criteria of "Quality"
As described earlier, better measures are needed than now exist for many salient dimensions of quality, and virtually none have been sufficiently tested for their predictive value with respect to desired outcomes.
In addition to RRC checklists, other measures suggested for assessing program quality include the percentages of graduates obtaining ABPN certification on their first or second try, achieving recognition for excellence via an APA fellowship, leadership positions, academic rank, publications, and the like, or having negative outcomes such as ethical violations or malpractice claims.
However, while these sorts of measures might seem useful, the lag time between one's period of residency training and these outcomes is often significant. Since today's training programs often change rapidly with respect to chairmen, training directors, faculty, resources, and other key features, it makes little sense to judge today's training programs on the basis of graduate or program characteristics of even a few years ago.
One element in quality assessment might begin with instruments based on selected RRC requirements. Assuming that consensus could be reached on a large number, say 15—20, of scalable dimensions rated from minimal to optimal, programs could be rated on the basis of total scores across dimensions, the number of optimal standards achieved, or some other explicit basis.
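A toy illustration of such a scoring scheme follows. The dimensions, the 1-5 scale, and the ratings are all invented for the example and reflect no actual RRC criteria:

```python
# Hypothetical sketch of rating programs on scalable dimensions derived
# from RRC-style requirements. Each dimension is scored from 1 (minimal)
# to 5 (optimal); programs can then be compared by total score or by the
# number of dimensions rated optimal. All names and numbers are invented.

OPTIMAL = 5

def summarize(ratings):
    """Return (total score, count of dimensions rated optimal)."""
    return sum(ratings.values()), sum(1 for r in ratings.values() if r == OPTIMAL)

program_a = {"faculty supervision": 5, "didactic curriculum": 4,
             "clinical breadth": 5, "resident evaluation": 3}
program_b = {"faculty supervision": 4, "didactic curriculum": 4,
             "clinical breadth": 4, "resident evaluation": 4}

total_a, optimal_a = summarize(program_a)   # 17 total, 2 optimal
total_b, optimal_b = summarize(program_b)   # 16 total, 0 optimal
```

Even this toy example exposes the arbitrariness discussed earlier: program A "wins" on both criteria here, but an unweighted sum treats every dimension as equally important, which is itself a value judgment of exactly the kind a consensus process would have to negotiate.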
8. Marketplace Allocation

If residency applicants are limited in number but are free to choose any program based on whichever factors they consider relevant, the "marketplace" will allocate residents to programs—essentially an extension of the current matching plans. Programs that are attractive, whether because of perceived educational quality, desirable location, faculty reputation, salary and benefits, or any other reasons, will attract applicants. Assuming that the number of applicants is limited, for example, to the number of U.S. graduates plus 10%, and that some national mechanism is created to specify or designate which students or IMGs are eligible to apply, reductions in size or number of programs will occur quite readily. Indeed, were it not for the increase in IMGs in psychiatry over the past decade, the marketplace option would already be operating. With fewer American applicants and a severely limited number of IMGs, programs not matching would wither and close when they fell below critical mass.
This option shifts the emphasis from identifying program quality to designating who may apply (or be funded) for residency. This process then relies upon the perceptions of residency applicants to determine which are the "quality" programs. If the marketplace were to be kept as the sole determinant of house staff distribution, precautions would have to be instituted to assure that a program's popularity was based on the excellence of legitimate training opportunities rather than on special deals created to attract applicants, such as massive medical school loan repayment as part of incentive programs.
If there were more qualified applicants than positions, mechanisms already working in the matching plans, by which American graduates and well-qualified IMGs compete head to head against one another, would decide who would be accepted.
Other marketplace forces can be expected to influence the size and distribution of psychiatric residency programs. Since most residency graduates locate near where they train, we envision that training programs will receive frequent feedback on the local undersupply or oversupply of psychiatrists from various interested parties such as recent graduates, psychiatric societies, public and private-practice employers, consumer representatives, and other local stakeholders.
Recent trends in positions offered and filled through the match suggest that a growing number of programs are falling toward or below the RRC-defined "critical mass" of three residents per year of training. Sixty-five programs in 28 states filled two positions or fewer in the 1994 match (14). This pattern may obviate the need for some of the cuts if the RRC enforces its critical-mass requirements. Additionally, reductions caused by any of the options discussed may result in more programs dropping below the critical-mass threshold. If such programs are forced to close, pressures for cuts in all other programs may be reduced.
Most of the options do not address a key "open-system" element in the residency program environment—change. The task force has explored ways to structure the process of cutting residency positions based on the current situation of economic and political pressures and regulatory forces. However, many of these options, particularly those based on centralized mechanisms, may not be sufficiently flexible or responsive to the rapid year-to-year changes that often take place within programs related to faculty, leadership, facilities, funding, and many other key ingredients of the training environment.
We found many problems with arguments and proposals based on purely top-down systems for regulating the number and location of trainees, deciding on the optimal size of the psychiatrist workforce, and making all of the inherently arbitrary decisions about the assessment of quality.
Accordingly, we remain skeptical about any single, central, top-down system that would determine the number of residents for the entire nation, where they should be trained, and how many residents each program should be allocated on the basis of relatively simple definitions of quality in psychiatric training. Too many differences exist in regional workforce needs; physician migration patterns; services; social, cultural, and political forces; and other important influences for a single system based on RRC criteria to consider all of these realities capably for every locale. Although RRC criteria have something valuable to offer, we believe that many important aspects of quality, as defined by key stakeholders, including the ways in which high-quality programs may differ significantly from one another, cannot be measured by RRC criteria alone. We also believe that pressures to regulate the psychiatric workforce imply that parallel consideration should be given to regulating the workforce of the other mental health professions.
After due consideration, our task force supports the following proposals.
If nationally mandated rationing of house staff positions should ever come to pass, we suggest that such rationing should be decided on the following basis.
First, a system of regional councils for psychiatric house staff training should be established, based in regional health authorities. Discussions on how to establish and manage these councils and related issues are beyond the scope of this article.
Second, the available number of psychiatric house staff positions should be distributed among these regional councils on a proportional population basis.
Third, these regional councils should decide how available training slots should be distributed within the region, based on input from all stakeholders in the region, including academic departments, professional societies, and consumer groups. Steps should be taken to ensure that an appropriate mix of psychiatrists is trained in regionally suitable sites, chosen on the basis of quality and location, and that local training plans also take into account psychiatric graduates' migration into and out of the region.
Fourth, reductions in program size, if necessary, should begin with voluntary reductions, encouragement of program mergers and consolidations, and natural attrition based on the results of the match.
Fifth, in the event that house staff numbers are limited to a fixed number based on American medical graduates plus a set percentage of IMGs, American and IMG applicants should all participate "head to head" in appropriate matching programs.
Sixth, and finally, funding should be attached to the training positions through a national or regional postgraduate medical education funding mechanism so that 1) psychiatric training does not suffer in local resource battles relative to other medical specialty training and 2) residency training does not remain as heavily dependent financially on service systems as it is now.