A rush of deadlines has cramped my blogging over the last few weeks, including one for the AALS Workshop on the Rankings forum. My co-author, Andy Morriss, was one of the panelists.
The responses to the various rankings panels and breakout sessions were full of frustration and high-horse moralizing. As someone who has spent a fair amount of time examining rankings from an empirical perspective, I have come to one clear conclusion: normative arguments, no matter how cogent and persuasive as a matter of principle, are simply irrelevant. Rankings have whipsawed the legal academy because of institutional self-interest and a collective action problem.
The chart below, which is from our Measuring Outcomes paper, is a strong piece of evidence that we (the legal academy) have lost our way. In 1997, the U.S. News ranking added bar passage to its placement methodology. This change coincided with a movement toward higher bar cut scores. (See Gary Rosin's Unpacking the Bar for the implications of this trend.) Bar results have the advantage of being verifiable. In contrast, that same year, U.S. News dropped the salary component from its methodology because of evidence that schools were routinely inflating their data.
Andy and I had anecdotal evidence that schools responded to this pressure by increasing 1L attrition--i.e., imposing a strict curve that guarantees that a fixed percentage of students will flunk out. Because 1L grades are strongly correlated with bar exam scores, this policy could boost a school's bar passage rate. According to data posted on the ABA website, law school attrition rose 20% between 1997 and 2004. But when the numbers were disaggregated, the following pattern emerged.
Obviously, 1L attrition has gone up. Fortunately, since 1997, the annual ABA-LSAC Official Guide to Law Schools has published data in a uniform format for each law school, including a breakdown of attrition. (These changes were a response to complaints by U.S. News that law schools were lying to it.)
Andy and I gathered all the attrition data from the 1998 and 2005 editions (published in 1997 and 2004) and ran paired-sample t-tests to determine whether the increases were statistically significant. The table below, which is weighted by class size, summarizes our results. (Note: to be included in the table, a school had to be ranked in 1997, so the results are not affected by the recent rush of new law schools.)
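For readers who want the mechanics, here is a minimal sketch of a class-size-weighted paired-sample test of the kind just described. It is not our actual code; the file and column names are hypothetical, and it assumes one row per ranked school with academic attrition rates from the two editions.

```python
# Minimal sketch of a weighted paired-sample t-test (hypothetical data).
import pandas as pd
from statsmodels.stats.weightstats import DescrStatsW

# Hypothetical file: one row per school ranked in 1997, with academic
# attrition rates from the 1998 and 2005 Official Guides and 1L class size.
df = pd.read_csv("attrition.csv")

# Per-school change in the academic attrition rate between editions.
diff = df["acad_attrition_2005"] - df["acad_attrition_1998"]

# Weight each school's change by class size, then test whether the
# weighted mean change differs from zero.
stats = DescrStatsW(diff, weights=df["class_size"])
tstat, pvalue, dof = stats.ttest_mean(0)
print(f"t = {tstat:.2f}, p = {pvalue:.4f}")
```

Weighting by class size keeps a handful of very small schools from driving the result.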
The change in academic attrition is statistically significant at p < .015. So the Wigmore strategy appears to be returning. Yet the change in "other attrition" is up even more. What is driving these results? Andy and I think it is the transfer-student gaming strategy. In a nutshell, Elite School A or Tier 1 School B shrinks its 1L class, gets an LSAT boost, and makes up the lost revenue by admitting more transfer students, whose credentials are irrelevant for U.S. News purposes.
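To see why shrinking the 1L class pays off under the U.S. News methodology, here is a toy illustration with made-up LSAT numbers; nothing below comes from any real school's data.

```python
import numpy as np

# Made-up entering-class LSAT scores for a hypothetical Tier 1 school.
entering_1Ls = np.array([168, 166, 165, 164, 163, 162, 160, 158, 156, 154])
print(np.median(entering_1Ls))   # 162.5 -- the reported median

# Shrink the 1L class by cutting the weakest admits from the bottom...
smaller_class = entering_1Ls[:8]
print(np.median(smaller_class))  # 163.5 -- the reported median rises

# ...then backfill the lost tuition revenue with transfer students,
# whose LSAT scores never enter the U.S. News calculation.
```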
As was discussed at the Rankings forum, many law schools now actively solicit transfer students through direct mailings. I doubt this was happening much, if at all, ten years ago. The Law School Survey of Student Engagement documents that transfer students tend to be more socially isolated than their peers who began as 1Ls.
Andy and I recommend that ALL outcome/placement data be released into the public domain: the number of firms interviewing on campus; employment rates by practice setting; salaries by practice setting; and MBE scores, controlling for entering credentials. This would at least force law schools to compete on the value added to students, rather than on gaming. I wonder whether the ABA reads this blog.
Bravo for your comments here, Bill, and kudos to you and Andy for a superb presentation at the workshop. I agree with you that we need to require verifiable data that's more difficult to game. (See my post at http://nancyrapoport.blogspot.com/2007/01/this-years-aals-was-different.html for how I'd like to see the data displayed.) I will never, never understand why law schools are more concerned with irrelevant and meaningless distinctions among schools than with doing the right things for the right reasons. Erica Moeser was right when she suggested that the USNWR rankings would be much less distorting if they were presented graphically, showing how trivial the distinctions are within certain groups of schools.
I'm looking forward to the future articles that you and Andy write on this topic.
Posted by: Nancy Rapoport | 08 January 2007 at 04:40 PM
Thanks, Nancy. I appreciate your insights. bh.
Posted by: William Henderson | 10 January 2007 at 09:29 AM