Today I am in California for a hearing before the Board of Governors, which is the elected body that oversees the state bar exam. For the past 18 months, I have been part of a research team (with Richard Sander, Vik Amar, Doug Williams, and Stephen Klein) seeking California data to test a possible mismatch effect in law schools. The substance of the hearing today (see agenda here) was to determine whether we would be permitted to obtain regression results, via the Bar's own longtime psychometrician, from the Bar's rich archive of historical data.
After hearing approximately 50 minutes of prepared testimony from those for and against our study, the Board voted to deny our data request. Based on the tenor of the meeting, which was set by a well-organized opposition led by Cheryl Harris (UCLA Law) and Michele Landis Dauber (Stanford Law), the vote turned on legal uncertainties surrounding candidate confidentiality and consent.
Acceding to these concerns has, in my opinion, three major flaws: (a) our research team would only have access to regression tables with results aggregated among groups of schools, so no individually identifiable information would ever be released to researchers or the public; (b) we would, at all times, be subject to university IRB protocols; and (c) the broad construction given to consent in this context suggests that much of the research sponsored by the California Bar over the last 20 years has been unlawful; moreover, this decision casts doubt on the scope of any future research by the California Bar.
[Note: Richard Lempert (Michigan Law, and formerly chair of the NSF Law & Social Science Program) wrote this memo on the IRB issue, which he graciously forwarded to our research team in advance of the hearing. Rick's assessment, however, is quite different from my own experience obtaining IRB approval at three different institutions for projects that raised, in my opinion, much more significant human subjects issues. Regardless, no research would occur without IRB approval; that should be obvious to all parties involved.]
Of course, the Board of Governors is not an adjudicative body -- rather, in this case, it is the regulatory unit that would be dragged into court if it approved our proposal. Thus, on one level, I can understand the institutional reasons for taking a cautious approach. (For a snapshot of the public debate leading up to this decision, see this National Law Journal op-ed by Cheryl Harris and Walter Allen, and our NLJ response; in the spirit of transparency, all documents related to our projects are posted on this website.)
Although I am disappointed with the outcome, I am proud of the testimony we put on today. The basic theory underlying the mismatch effect is that recipients of large preferences (regardless of race) tend to learn less because classroom instruction is pitched toward a modal student with significantly higher entering credentials. Honest researchers can argue over the methodology and inferences of Sander's Systemic Analysis study. But the existing data provide little comfort that our current system is working well. Minority students are disproportionately clustered at the bottom of their classes. Since law school GPA (LGPA) is the single best predictor of bar passage, it is not surprising that these same students struggle on the bar examination. Sander's first choice/second choice analysis (in his 2005 Stan. L. Rev. response essay) suggests that law school performance and bar passage prospects actually improve when a student opts for a less elite law school. In my opinion, if this theory is wrong, it is imperative that it be debunked empirically rather than through efforts to withhold data.
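To make the first choice/second choice design concrete, here is a minimal sketch of the kind of comparison involved, run on simulated data. Everything in it (the column names, the coefficients, the sample size) is hypothetical and illustrative; nothing comes from Sander's study or from any Bar data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical simulated cohort: students admitted to their first-choice
# school, some of whom opted to attend a less elite second choice.
rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "credentials": rng.normal(0, 1, n),      # standardized LSAT/UGPA index
    "second_choice": rng.integers(0, 2, n),  # 1 = attended less elite school
})

# Simulated outcome: bar passage rises with credentials and, by assumption
# here (to illustrate what a mismatch effect would look like), with
# attending the second-choice school.
latent = 0.8 * df["credentials"] + 0.3 * df["second_choice"] + rng.normal(0, 1, n)
df["passed_bar"] = (latent > 0).astype(int)

# Compare otherwise similar students: a logit of bar passage on school
# choice, controlling for entering credentials. A positive coefficient on
# second_choice is the pattern the mismatch hypothesis predicts.
fit = smf.logit("passed_bar ~ credentials + second_choice", data=df).fit(disp=0)
print(fit.params)
```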
As a lifelong Democrat and ardent supporter of racial diversity, I don't need to apologize for raising these issues. As I said in my prepared remarks, it does not follow that evidence of a mismatch effect requires the dismantling of racial preferences -- indeed, in my opinion, this would be a very bad idea.
But if we fail to diagnose the factors that contribute to low minority bar passage, we have no basis to formulate effective policy or educational strategies. Regardless of which way the data cut, our study would have guaranteed one of two favorable outcomes:
- Using the more refined California Bar exam data (a continuous dependent variable rather than pass/fail, with schools grouped into analytically useful clusters, unlike the LSAC-BPS data), we might find that the mismatch theory has little or no empirical support. A contentious academic theory would then be put to bed, at least in the law school context. (A short sketch of the dependent-variable point follows this list.)
- A mismatch effect would be supported, which would pressure law schools to take concrete steps to help current or prospective students. These might include: (a) disclosures that reveal bar passage prospects for past students with similar entering credentials; (b) creation of rigorous academic support programs (such as this one) that increase bar passage rates for students in the bottom half of the class; and (c) identification of curricular and teaching strategies that produce higher bar exam scores.
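The dependent-variable advantage mentioned in the first outcome above can be shown in a few lines. The sketch below, again on simulated data with invented numbers (the cut score, the coefficients, the noise levels), contrasts a logit on pass/fail, which is all the LSAC-BPS data permit, with an OLS regression on a continuous scaled score, which is what the California Bar's archive would support. The continuous outcome preserves information that pass/fail coding throws away.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical simulated data; the cut score and coefficients are invented.
rng = np.random.default_rng(0)
n = 1000
credentials = rng.normal(0, 1, n)                        # entering-credentials index
score = 1440 + 60 * credentials + rng.normal(0, 40, n)   # continuous scaled bar score
passed = (score >= 1440).astype(int)                     # pass/fail at an invented cut

X = sm.add_constant(credentials)

# Binary outcome (LSAC-BPS style): the logit discards how far above or
# below the cut line each candidate falls.
logit_fit = sm.Logit(passed, X).fit(disp=0)

# Continuous outcome (California style): OLS retains that information,
# yielding more statistical power from the same number of observations.
ols_fit = sm.OLS(score, X).fit()

print(logit_fit.params, ols_fit.params)
```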
As legal educators, we should want more than the status quo.