The following is the story of what I believe to be the first ranking of all U.S. law schools. (If I am mistaken, please email me additional facts and I will post them.) As a service to the law school community, I have scanned and linked to the relevant articles; otherwise, they would remain hidden away in the stacks of most law school libraries. I hope readers find this account interesting and enjoyable.
The story begins with a magazine. In the early 1970s, the ABA Section of Legal Education and Admissions to the Bar attempted to foster a dialogue between the legal academy and practitioners by starting a periodical called Learning & the Law. Its editor was a law professor named Charles Kelso (now emeritus at McGeorge), a person of prodigious energy who served on numerous ABA inspection teams during the 1960s and '70s. Before ceasing publication, Learning & the Law put out four terrific, interesting volumes (1974-77) that tapped many of the nation's leading academics and law school deans to write about issues affecting law, society, pedagogy, legal education, and the practicing bar.
In the summer of 1975, Charles Kelso drew upon data collected by the ABA to publish a rating of law school resources--e.g., faculty size, student/faculty ratio, volumes in the library, etc. See Adding Up The Law Schools: A Tabulation and Rating of their Resources. The idea for the Resource Index emanated from Kelso's observation that schools with part-time programs tended to have fewer resources per capita and, in general, focused on training competent practitioners rather than producing scholarship, championing social causes, or providing an advanced liberal education. The compilation of such a list, reasoned Kelso, could help ABA inspection teams evaluate resources vis-a-vis law schools' stated social and educational missions.
The first paragraph of the article explicitly stated, "[This list] is not a quality rating of law schools" (emphasis in original). But it didn't matter. The article listed all U.S. law schools in descending order of resources. Further, each school was placed in one of seven groups (you know, like law school tiers) and assigned a resource grade of A, B, or C. Groups 1 and 2 were designated an "A"; Groups 3, 4, and 5 were given a "B"; and Groups 6 and 7 were given a "C". Kelso also noted that a recent listing of top law schools [based on survey data from law school deans, published in Change magazine, Winter 1974-75] closely tracked the top of the Resource Index.
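For readers who like to see the arithmetic, here is a minimal sketch (in Python) of how an index of this general kind works: raw resource measures are normalized, summed into a single score, sorted in descending order, carved into seven groups, and graded A, B, or C. The school names, numbers, metrics, and equal weights below are invented for illustration; the article does not disclose Kelso's actual inputs or formula.

"""A toy resource index: rank schools on pooled resource measures,
split the ordering into seven groups, and grade the groups A/B/C.
All names, numbers, and weights are invented for illustration."""

HIGHER_IS_BETTER = {"faculty_size": True,
                    "students_per_faculty": False,   # smaller classes score higher
                    "library_vols_per_student": True}

schools = {
    "Alpha":   {"faculty_size": 80, "students_per_faculty": 11, "library_vols_per_student": 1200},
    "Beta":    {"faculty_size": 60, "students_per_faculty": 14, "library_vols_per_student": 900},
    "Gamma":   {"faculty_size": 55, "students_per_faculty": 16, "library_vols_per_student": 700},
    "Delta":   {"faculty_size": 40, "students_per_faculty": 19, "library_vols_per_student": 600},
    "Epsilon": {"faculty_size": 35, "students_per_faculty": 22, "library_vols_per_student": 450},
    "Zeta":    {"faculty_size": 30, "students_per_faculty": 25, "library_vols_per_student": 380},
    "Eta":     {"faculty_size": 25, "students_per_faculty": 28, "library_vols_per_student": 300},
}

def normalized(metric, value):
    """Scale a raw value to 0-1 across all schools, flipping metrics
    where a lower number is the better number."""
    values = [s[metric] for s in schools.values()]
    lo, hi = min(values), max(values)
    score = (value - lo) / (hi - lo) if hi != lo else 0.5
    return score if HIGHER_IS_BETTER[metric] else 1.0 - score

def resource_score(data):
    """Equal-weight sum of the normalized metrics (an assumption, not Kelso's formula)."""
    return sum(normalized(m, v) for m, v in data.items())

# Rank in descending order of resources, as the 1975 article did.
ranked = sorted(schools, key=lambda name: resource_score(schools[name]), reverse=True)

# Carve the ordering into seven groups and grade them: Groups 1-2 = A, 3-5 = B, 6-7 = C.
group_size = -(-len(ranked) // 7)          # ceiling division
for i, name in enumerate(ranked):
    group = i // group_size + 1
    grade = "A" if group <= 2 else ("B" if group <= 5 else "C")
    print(f"{i + 1:2d}. {name:8s} group {group}  grade {grade}  "
          f"score {resource_score(schools[name]):.2f}")

The point of the sketch is simply that once any such aggregation is printed in descending order, the rank order (and not the underlying caveats) is what readers see, which is exactly what happened next.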
In the weeks that followed, Learning & the Law was besieged with letters, many from law school deans, complaining about the inaccuracies of Kelso's methodology. The Fall 1975 issue published several critical letters and a comprehensive defense by Kelso. See In Defense of the Mathematics of Law School Evaluation. For a summary of that provocative exchange, you'll have to read past the jump.
With the benefit of hindsight, two unmistakable impressions emerge from the Fall 1975 issue of Learning & the Law: (1) Charles Kelso honestly believed that careful evaluation of law school resources and missions could improve the quality and cost of legal education for all relevant constituencies; and (2) the collection and presentation of that data in rank order was akin to letting the genie out of the bottle--no matter what Kelso said about the use and meaning of the "Resource Index", it was going to be perceived by the law school community as a ranking of quality.
The beginning of Kelso's Defense essay reads as follows:
The dean of a Midwestern law school has written to suggest that the record be set straight on what inference can and cannot properly be drawn from the Resource Index published in the Summer 1975 issue of Learning. This dean and several others have predicted that the Index will be misused to give the impression that the ABA is now in the business of ranking law schools and that the results of the Index methodology may be misinterpreted to imply a single-dimensioned "quality" ranking for law schools. ...
It should be restated that the Index "is not a quality rating of the law schools." The position of a school on the Resource Index is not an assessment of the relative academic or professional benefits that particular students may obtain by attending particular schools. The Index does not speak to research output or to the significance in the intellectual or legal life of the nation. ... Further, the ranking is only an approximation of relative position.
An "approximation" indeed. And because a definitive list of relative position did not exist, the Resource Index had the potential to serve as a single, gigantic anchor.
The responses by the deans were particularly telling. They did not object so much to the aggregation of the data and the distortions inherent in the quantitative process. Cf. Sauder & Lancaster, Do Rankings Matter? The Effects of U.S. News & World Report Rankings on the Admissions Process of Law Schools, 40 Law & Soc'y Rev. 105 (2006). Rather, the comments displayed a dissatisfaction (or occasional glee) with the results. In other words, "if the methodology were corrected, the Index would properly reflect [School X's] true position."
For example, Dean Norman Redlich of NYU Law, which had placed #8 in the Change magazine reputational ranking, complained that the Kelso methodology failed to take into account his school's 86 adjuncts, which enabled the school to provide a rich array of LLM specialty programs. Redlich also quibbled with the accounting for per capita library resources. Redlich concluded, "Adjusting Professor Kelso's rating accurately to reflect just the student/faculty ratio would lift New York University to his Group 1 ("A"), which is his highest category."
Dean Frederick M. Hart of the University of New Mexico complained that the Kelso methodology downgraded smaller schools when, in fact, small size was New Mexico's greatest strength. Dean Hart thus wrote (we assume partially tongue-in-cheek):
To downgrade us because of our small size is similar to penalizing the Dallas Cowboys because of "Too Tall" Jones's height. His height makes him a better football player and Dallas a better team; our size makes us a better law school. ... [When the Kelso methodology is corrected, it] clearly demonstrates that we are the best law school in the country.
Professor Peyton Neal, who was on the faculty at another small school, Washington & Lee, also complained about a probable size bias (but then why did Georgetown and NYU fare so poorly?):
I seriously question the fact that your analysis appears to place a premium on larger schools. While I believe that it may be probable that some of the larger schools are better in many respects, I question whether some of our smaller schools may indeed be in a superior position when all applicable statistics are considered realistically.
My favorite letter was written by Dean Monroe Freedman of Hofstra University School of Law, which was on the winning end of the Index game:
Hofstra Law School has been inundated by letters and telephone calls congratulating us, ever since the publication of Professor Kelso's Resource Index for rating law schools, published by the American Bar Association and Learning and the Law. On the Kelso Index, Hofstra is ahead of such schools as Georgetown and Boston University, tied with Boston College and George Washington, and only three points behind N.Y.U. ...
[But other important criteria have been omitted,] and Hofstra Law School, in particular, would show up even closer to the top of the list if these factors were appropriately taken into account.
Perhaps Kelso could argue that all of these academics were drawing improper inferences--that the real purpose of the Index was to create a means to diversify and ultimately improve legal education. But his Defense essay also cited numerous other data showing that a favorable Index score was strongly correlated with other desirable attributes, such as a range of elective courses, clinical offerings, better placement in larger firms, more sophisticated teaching styles, and better-credentialed and more prolific faculties whose members were more likely to belong to the American Law Institute. An inference of relative quality was therefore irresistible.
Charles Kelso concluded his essay by arguing that improvement in legal education had been stymied by a lack of the data needed to assess the relationship between law school programs and various measures of law school success. Thus, Kelso reasoned, the Resource Index should be compiled on an ongoing basis along with "an ongoing program of data collection about goals, programs, classroom interactions, exit data, and alumni career and competency information."
The ABA apparently did not share Kelso's optimism or vision. The Resource Index was not reproduced the following year. And by 1977, Learning & the Law ceased publication. Fifteen years later, U.S. News & World Report stumbled upon the concept of aggregating similar information for commercial gain. And as the various labor markets have anchored on these figures, virtually every law school in the country allocates millions of dollars in resources to jockey for position in a competition that is completely unhinged from educational merit and lawyer training.
There is a simple way to battle the influence of U.S. News--the ABA could complete Kelso's vision by collecting data and sponsoring research on the relationship between law school programs and various measures of lawyer success. When prospective students know how their tuition dollars relate to their longer term personal and professional goals, these empirical facts will have greater force than the U.S. News "approximation[s]" of educational quality. Students will vote with their feet.
But do law schools really want to compete on the basis of value-added for students? Or do we want to maximize things that matter to us (law school faculty), like scholarly prestige? I think the answer is obvious. Hence, the U.S. News stranglehold on legal education will continue until law schools start going bankrupt. And when we examine the upward trends in law school tuition, that day of reckoning may not be far off for many law schools.
I read the 1975 article which compared law schools back then. As I vaguely recall, Northwestern and the U. of Washington did surprisingly well because they had good ratios of profs to students and books to students. I think Leiter's comparisons sometimes use that approach. For example, he tells us endowment per student, as do some rankers of colleges.
Posted by: John Rooney | 20 October 2006 at 09:01 AM
The ABA could complete Kelso's vision by collecting data and sponsoring research on the relationship between law school programs and various measures of lawyer success. When prospective students know how their tuition dollars relate to their longer term personal and professional goals, these empirical facts will have greater force than the U.S. News "approximation[s]" of educational quality. Students will vote with their feet.
I think that is very true, and it is what the students are looking for (and think they are getting) when they read rankings.
Posted by: Stephen M (Ethesis) | 18 October 2006 at 07:36 AM
There are many problems with law school rankings. At one time I looked closely at the U.S. News rankings and ALL measures were seriously flawed. (I could send a never-published draft article to anyone interested.) Some problems have been corrected, in part in response to comments from me and others, because U.S. News has always wanted a quality survey, but many flaws remain and various biases are built into the survey. The two biggest problems in my view are (1) the extent to which the rankings can affect law school behavior in counterproductive ways and become self-fulfilling prophecies and (2) the fact that a valid unidimensional ranking is impossible. A Consumer Reports-type ranking system in which relatively similar schools were clustered would be more defensible, though there would be obvious unfairness at cluster borders.
It is, ironically, the very impossibility of developing a consensually valid ranking of schools that helps make the U.S. News rankings seem on target. They look right. But what is overlooked is that there are many different rankings that would look right. This wasn't always the case. In the ranking issue that started the series (there was one based on different methods a year or two before), Harvard was number 5 as I recall, which by itself called the validity of the rankings into question. The next year Harvard had moved to #2, as U.S. News changed its weights and criteria. Indeed, when last I looked (about a decade ago), many changes from year to year were due to changes in the U.S. News criteria and weights rather than to changes in how the schools stood on various measures in relation to each other.
Also, in those early days, if schools did not cooperate by providing U.S. News with information (and more information was confidential then), U.S. News "estimated" their position on these dimensions, often, the schools felt, at an unrealistically low level, thus penalizing them in their overall ranking and inducing them to provide their true data the next year. (I looked at this and found some egregious examples, but for the most part the estimations, if I remember correctly, did not seem vastly distorted.) Also, from the start, schools gamed the system or cheated. The first year U.S. News was able to get accurate LSAT data through the ABA, it reported that something like 30 schools had provided elevated data when they thought U.S. News had no way of checking. Other kinds of gaming the system also happened. For example, I was told of one school that admitted all their minority students to their evening school, since rankings were based on day school statistics, and then allowed those who wanted to transfer freely to the day division.
A final anecdote. I presented the draft paper at one school in U.S. News's 5th tier. The school was in my view a school with a number of quality faculty and had a right to be distressed. I did not hear a good word about the survey while at the school. Several months later a new ranking came out and a faculty member sent me a copy of a memo the law school dean had distributed bragging about the school's rise to the fourth tier. What the memo did not say was that U.S. News had eliminated the 5th tier and the 4th tier was now at the bottom!
Rankings are not necessarily bad. In my experience at Michigan, some of the very best teaching goes on at the business school. Business schools don't care much about the U.S. News ratings because their students are interested in the ratings produced by one of the leading business magazines (e.g., Fortune, Business Week, Forbes, but I don't remember which one). This magazine apparently surveyed students at these schools on the quality of their teaching and weighted this heavily in its rankings. The Michigan B School, which typically ranked somewhere in the teens, decided to emphasize teaching quality in hiring and promotion and was rewarded a few years later by a number two ranking; students were rewarded every day by the focus on good teaching and the efforts made to help faculty become better teachers.
The idea of an ABA or AALS ranking system is not a bad one if (a) the factors weighed are factors that matter to legal education and that we want schools to give greater emphasis; (b) there are separate rankings along different, non-commensurate relevant dimensions; and (c) the overall ranking is done by groupings rather than by creating false distinctions between schools that attract largely similar students and have faculties of more or less similar strength.
Rick
Posted by: Rick Lempert | 17 October 2006 at 09:38 PM