Previously:
1. Introduction
2. Are Black / White Disparities in Graduation and the Bar Getting Better, or Worse?
3. A Reprise of the Mismatch Argument
4. First-Choice / Second-Choice Analysis
5. Do Elite Schools Avoid the Mismatch Effect?
6. Grades and the Mismatch Effect
7. Can the Market Fix the Mismatch?
This week, I’ve been aided by
two third-year students at UCLA – Patrick Anderson (who will
coauthor my forthcoming book) and Matthew Butterick (whose independent work on
the legal side of the mismatch was mentioned in Post 7). I’m grateful to
them both.
My
previous posts laid out the story of what is happening to black law
students as a result of
affirmative action. There are grey areas and we need
better data to resolve several issues. But if this were a less
controversial
topic, I think many would agree that affirmative action in law schools
may not be achieving many of its core
goals, and that it is probably producing some extremely serious side
effects. If affirmative action were not an entrenched social
policy, but instead an experimental drug, could it possibly survive an
FDA
review? Not on the existing evidence.
As
I suggested at the beginning of the week (and Bill
Henderson suggested earlier), there has not yet been a complete
critical
engagement with these arguments and the underlying data. Critics have
raised specific questions about my analysis, which in some cases have
led me to new insights and improvements. But no critic of
the mismatch hypothesis has woven the data into a consistent alternative theory.
I have responded to most of the critiques; rather than rehash those, I’d
like to illustrate the problem by examining a recent
critique by Rothstein & Yoon.
Jesse Rothstein and Albert Yoon have produced by
far the strongest response to Systemic
Analysis,
and I know and admire Albert. Rothstein & Yoon
recognize the problem of selection bias, and attempt to
deal with it. They construct models for the data, and they report
evidence of significant mismatch effects, though they think other
factors
might explain their results. All this is good. But Rothstein &
Yoon also commit many of the common critical errors:
Rhetoric. Rothstein & Yoon repeatedly say they
find no evidence of mismatch effects in the top 80% of the credential
distribution of law school applicants. This is misleading. On closer inspection, one finds that
the “top 80%” of all students includes only the top
20% of black students, so the mismatch effects they do find apply to the
remaining 80% of black students. Even if they were right that the
top 20% of black students don’t suffer mismatch (but see below), it
trivializes the problem to say it affects only the “bottom
quintile”.
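To see how a claim about the “top 80% of applicants” can translate into a much smaller share of black students, here is a minimal sketch using purely illustrative normal distributions. The 1.7-standard-deviation gap below is an assumption chosen so the numbers come out as described, not a figure taken from the BPS data:

```python
from statistics import NormalDist

# Hypothetical illustration (not the actual BPS data): suppose overall
# applicant credentials follow a standard normal, and the black applicant
# distribution is shifted down by an assumed 1.7 standard deviations.
overall = NormalDist(mu=0.0, sigma=1.0)
black = NormalDist(mu=-1.7, sigma=1.0)

# Credential cutoff marking the bottom 20% of ALL applicants.
cutoff = overall.inv_cdf(0.20)

# Share of black applicants falling below that cutoff.
share_black_below = black.cdf(cutoff)
print(f"cutoff = {cutoff:.3f}")
print(f"share of black applicants in overall bottom quintile = {share_black_below:.1%}")
```

Under these assumed numbers, the “bottom quintile” of the overall distribution captures roughly four-fifths of the shifted subgroup, which is the arithmetic behind the rhetorical point.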
Ignoring relevant data. Given Rothstein & Yoon’s concern with
selection bias, it would seem that a comparison of first- and second-choice
blacks would be called for; no one has identified any meaningful bias in the
first-choice / second-choice data. Moreover, one can split the
first-choice / second-choice sample to compare the elite blacks (who supposedly
do not suffer from mismatch effects) with other blacks. I’ve done this,
and the effects seem indistinguishable – that is, blacks with high
credentials seem to gain as much from going to second-choice
schools as other blacks do. In other words, Rothstein & Yoon don't reconcile their findings with relevant data.
Shrinking “significance”. Law
student outcomes (for all
racial groups) get much better with rising credentials. Relatively few
students of any race with an academic index above 700 fail to graduate
and pass
the bar. But that doesn’t mean the mismatch effect has declined, if by
"mismatch" we mean how much relative learning occurs. For example, it
could easily be that mismatched students with fairly high
credentials are scoring much lower on bar exams than comparable,
non-mismatched
students, but both groups have mean scores far above the passing
threshold. Indeed, mismatch effects in grades (a measure of learning)
are
not different at all for the most elite blacks.
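The point that a large score gap can all but vanish in pass/fail data is easy to illustrate with hypothetical numbers. The means, spread, and passing threshold below are assumptions chosen for illustration, not actual bar statistics:

```python
from statistics import NormalDist

# Hypothetical bar-score distributions (illustrative numbers only):
# comparable non-mismatched and mismatched students with fairly high
# credentials, where the mismatched group averages 7 points lower.
non_mismatched = NormalDist(mu=150, sigma=10)
mismatched = NormalDist(mu=143, sigma=10)
passing_score = 125  # assumed passing threshold

# Pass rates: the share of each distribution above the threshold.
pass_non = 1 - non_mismatched.cdf(passing_score)
pass_mis = 1 - mismatched.cdf(passing_score)

print(f"mean score gap: {150 - 143} points (0.7 standard deviations)")
print(f"pass rates: {pass_non:.1%} vs {pass_mis:.1%}")
# A 0.7-SD gap in scores shows up as only a few percentage points of
# pass/fail difference, because both means sit far above the threshold.
```

A binary pass/fail measure would report both groups passing at well over 95%, masking a score gap that a researcher with access to actual scores would immediately see.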
Rothstein and Yoon compound this problem by focusing on “ultimate bar
passage” as a key outcome measure. Since first-time bar failure
rates are much higher than ultimate bar failure rates, looking only at a
binary measure of ultimate bar failure throws out most of the relevant
information. Ultimate bar passage is also not a sensible measure theoretically.
The mismatch effect occurs during law school. If a student passes the bar
on his fourth attempt, two years after law school, he may well have partially
offset the mismatch effect by hiring tutors, taking a variety of
bar-preparation courses, and other work aimed at learning what he didn’t
learn in law school. First-time bar attempts are the best measure of what
was actually learned in law school.
Critics, including Rothstein & Yoon, chronically favor measures that narrow
apparent black-white differences or shrink sample sizes, in what looks like a
quest to reduce the gaps to the point where one can proclaim "it's no longer
statistically significant, and therefore not a problem."
Measuring mismatch in the job market. Rothstein & Yoon use BPS data
to see if blacks are hurt in the job market by mismatch effects. This is
commendable, but they miss the obvious test -- they do not compare the relative
impact of school eliteness and law school grades on black earnings. If they
had, Rothstein & Yoon would have found results highly consistent with those
I found in Systemic Analysis. Rothstein & Yoon instead show that blacks
right out of law school receive large preferences. That's really not relevant
to whether mismatch effects hurt black chances in the job market. (And, as
I've argued elsewhere, the preferences in the job market often backfire on
black hires.)
Bill Henderson has done his own analyses of the BPS data on a few of these
points, with results compatible with mine; interested readers can ask him about
those.
Again, the point here is not to single out Rothstein & Yoon, whose paper
has many strengths. Rather, it is to highlight how even the better critiques too
often read like briefs on behalf of a sacred policy, rather than disinterested
investigations into the pros and cons of a controversial program.
As
I suggested at the outset, I see this debate as a sort of maturity test for the
ELS community. There are several things we, as an academic community, can do
to make progress on these issues, and to move away from what has become an
overly politicized area of research:
Pressure state bars to make bar performance data available.
Actual bar scores are dramatically better than pass/fail results for evaluating
mismatch effects across schools.
Pressure the LSAC to study the issue. LSAC has data on where most
applicants apply to law school, where they are accepted, where they
matriculate, and how they perform in the first year. It would not be
difficult to use this data as the foundation for a quasi-experimental dataset
similar to (but far superior to) the data on first- and second-choice
students. Supervised by credible, independent researchers, a study of
this type could be truly definitive.
Debate and engagement. More candid discussion of specific aspects of
law school affirmative action, and greater involvement in the debate by those
with no ax to grind, will necessarily make the discussion healthier and more
robust.
My
thanks again to the ELS hosts for creating this wonderful forum and letting me
borrow the microphone.