Previously: Part 1, Part 2, Part 3
In my last entry I summarized the three sets of evidence that support the law school mismatch hypothesis, and promised to examine each in more detail. I’ll start here with the third set – the first-choice/second-choice analysis – because it helps set up the others.
The BPS dataset includes some 1840 black law students who started law school in 1991. All completed a detailed survey shortly after matriculation, and somewhat more than half reported that they had been admitted to their “first choice” law school. Of these nearly 1100 students, about one-sixth reported that they had passed over their first choice school to go somewhere else. These 181 blacks are those I call the “second-choice” students. Although the study did not inquire in depth into students’ decision-making processes, most of these students indicated they had passed over their first-choice school for either financial or geographic reasons.
A black who passes up his first-choice school is still probably attending a school that used preferences in admitting him. But it’s plausible that, usually, such a student is passing up a more elite school to go to one somewhat less elite, and therefore he will be less “mismatched” than his peer who goes to his first-choice school. According to the BPS data, the average “second-choice” student has an academic index that’s 92 points lower than the average for students in his tier. The other eight hundred “first-choice” students have index scores that average 130 points lower than their tier mean. The BPS tiers are wide, of course, but this seems like a reasonable and unbiased estimate of how much “less” mismatched the second-choice students should be.
I noted yesterday that when we compare outcomes for black and white students with similar pre-law school credentials, black outcomes are substantially worse, and I attribute this gap to the mismatch effect. If we reduce the “mismatch” (i.e., the credentials gap), we should reduce the gap in outcomes. The simplest assumption is that the outcome gap will decline in proportion to the credentials gap. In other words, the second-choice blacks should shrink the outcome gap by (130-92) / 130 = 29% relative to the first-choice blacks.
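To make that proportionality assumption concrete, here is a minimal sketch in Python (the BPS analysis was not done this way; the variable names are mine, and the 130-point and 92-point figures are simply those quoted above):

```python
# Sketch of the proportionality assumption: if the outcome gap shrinks in
# proportion to the credentials gap, how much smaller should the
# second-choice outcome gap be?
first_choice_deficit = 130   # index points below tier mean, first-choice blacks
second_choice_deficit = 92   # index points below tier mean, second-choice blacks

predicted_reduction = (first_choice_deficit - second_choice_deficit) / first_choice_deficit
print(f"Predicted reduction in outcome gap: {predicted_reduction:.0%}")  # about 29%
```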
Okay. Let’s look at some results. In each of the tables below I compare the first-choice blacks with a weighted sample of whites with matched entering credentials, and I compare the second-choice blacks with a similar weighted sample of matched whites. That illustrates the gap caused by the mismatch effect, and controls for any differences in entering credentials between the two groups. For example, consider first-year grades:
| | First-choice blacks (n=819) | Comparable whites | Second-choice blacks (n=161) | Comparable whites |
| --- | --- | --- | --- | --- |
| Mean first-year GPA (standardized by school) | -1.00 | -0.30 | -0.71 | -0.20 |
| Outcome gap (black/white difference in GPA) | 0.70 | | 0.51 | |
| Proportionate reduction in outcome gap | (0.70 - 0.51) ÷ 0.70 = 27% | | | |
| Predicted reduction in outcome gap | 29% | | | |
| Statistically significant? | Yes, p < .001 | | | |
These results are important in a few ways. First, they obviously support the mismatch theory, so far as grades are concerned: a closing of the credential disparity produces a proportionate reduction in the grade disparity. Second, the good fit suggests that our measure of mismatch (that is, how far credentials depart from the tier mean) is reasonable. But third, note that these results strongly support my claim that the black-white grade gap is overwhelmingly caused by differences in entering credentials. If blacks were greatly underperforming relative to their credentials, then improving relative credentials would not have much impact on grades. But if a 29% reduction in the credentials gap produces a 27% reduction in the grade gap, that implies that over 90% of the grade gap would disappear if blacks and whites had the same entering credentials. Not conclusive by itself, but certainly suggestive.
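As a worked check on that chain of reasoning, here is a short sketch (again Python, again my own illustration rather than anything in the BPS analysis) that runs the table's numbers through the same logic:

```python
# First-year GPA means from the table above (standardized by school)
fc_black, fc_white = -1.00, -0.30   # first-choice blacks and their matched whites
sc_black, sc_white = -0.71, -0.20   # second-choice blacks and their matched whites

fc_gap = fc_white - fc_black                      # 0.70
sc_gap = sc_white - sc_black                      # 0.51
observed_reduction = (fc_gap - sc_gap) / fc_gap   # about 27%
predicted_reduction = (130 - 92) / 130            # about 29%, from the credentials gap

# If the grade gap shrinks in proportion to the credentials gap, the share of the
# black-white grade gap attributable to entering credentials is roughly:
implied_share = observed_reduction / predicted_reduction   # about 0.93
print(f"observed {observed_reduction:.0%} vs predicted {predicted_reduction:.0%}; "
      f"implied share of gap from credentials ~ {implied_share:.0%}")
```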
Ayres & Brooks (2005) and Chambers et al. (2005) made much of the claim that these differences are not statistically significant for third-year (final graduation) GPAs. But this finding resulted from a simple mistake. Many more "first-choice" students than "second-choice" students dropped out (or flunked out) before graduating, presumably because of their lower grades. If the worst first-choice students have dropped out, that can easily distort the analysis. The real question is whether any of the first-choice students somehow turn things around by the third year. To test this, we can keep the original students in the sample and impute the first-year GPA for any student whose third-year GPA is missing from the dataset (i.e., dropouts). This ensures we are comparing apples with apples. Once we do this, the first-choice / second-choice GPA gap is the same (and statistically significant), regardless of whether we compare first-year or final GPA.
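A minimal sketch of that imputation step, assuming a data frame with one row per student and columns named `gpa_year1` and `gpa_final` (the column names and values are hypothetical, not the BPS variable names):

```python
import pandas as pd

# Hypothetical stand-in for the BPS records; None marks a student who left
# before a final GPA was recorded.
df = pd.DataFrame({
    "gpa_year1": [-1.2, -0.4, -0.9, 0.1],
    "gpa_final": [None, -0.3, None, 0.2],
})

# Keep dropouts in the sample by carrying their first-year GPA forward, so the
# first-year and final-GPA comparisons cover the same set of students.
df["gpa_final_imputed"] = df["gpa_final"].fillna(df["gpa_year1"])
print(df)
```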
A similar pattern appears in bar passage rates. The first-time bar passage results follow:
| | First-choice blacks (n=643) | Comparable whites | Second-choice blacks (n=138) | Comparable whites |
| --- | --- | --- | --- | --- |
| First-time bar passage | 60% | 77% | 80% | 80% |
| Outcome gap (black/white difference, percentage points) | 17.0 | | 0.0 | |
| Proportionate reduction in outcome gap | (17 - 0) ÷ 17 = 100% | | | |
| Predicted reduction in outcome gap | 29% | | | |
| Statistically significant? | Yes, at p < .0001 | | | |
The difference between first- and second-choice outcomes is again large and statistically significant. If anything, the results are too powerful. Why would a 29% reduction in the credentials gap completely eliminate the first-time bar passage gap? It's probably partly random -- the numbers involved are not terribly large. It might also be a result of looking at a binary outcome (pass or fail) rather than a continuous outcome (bar scores). It's intuitively sensible that at some threshold in each state, small gains in a group's average test scores would produce disproportionately large gains in bar passage. (This relates to the "curvilinear" argument I made in "Reply to Critics", pp. 1970-71). But whatever the explanation, these results again powerfully support the mismatch hypothesis.
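To illustrate the threshold point, here is a toy calculation: if bar scores in a group are roughly normally distributed and a state sets a fixed cutoff, a modest shift in the group mean near that cutoff produces a disproportionate jump in the passage rate. The cutoff, spread, and means below are invented for illustration only:

```python
from scipy.stats import norm

# Toy model: scores ~ Normal(mean, sd); a student passes if score >= cutoff.
cutoff, sd = 0.0, 1.0
mean_first_choice = -0.25     # group mean sitting just below the cutoff
mean_second_choice = 0.05     # modestly higher group mean

def pass_rate(mu):
    return 1 - norm.cdf(cutoff, loc=mu, scale=sd)

print(f"{pass_rate(mean_first_choice):.0%} -> {pass_rate(mean_second_choice):.0%}")
# A shift of 0.3 sd in the group mean moves the pass rate from about 40% to about 52%.
```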
Now consider one more BPS outcome -- whether graduates eventually pass the bar:
| | First-choice blacks (n=643) | Comparable whites | Second-choice blacks (n=138) | Comparable whites |
| --- | --- | --- | --- | --- |
| Ultimate bar passage | 77.3% | 87.6% | 86.2% | 90.5% |
| Outcome gap (black/white difference) | 0.103 | | 0.043 | |
| Proportionate reduction in outcome gap | (0.103 - 0.043) ÷ 0.103 = 58% | | | |
| Predicted reduction in outcome gap | 29% | | | |
| Statistically significant? | Yes, at p = .02 | | | |
Although this result is statistically significant, slight changes in specification can push it into non-significance (that is, p > .05). Ayres & Brooks (2005) took that fact to be strong evidence against the mismatch effect. They implied that any first-choice / second-choice comparison producing a non-significant result casts doubt on all the results. But that makes no sense. Here (and in all comparisons of bar passage), the measured improvement in ultimate bar passage greatly exceeds what the mismatch effect predicts. True, it is not always statistically significant, but given the size of the two samples and the fact that we are comparing binary outcomes, any difference of less than six percentage points would not be statistically significant at the five percent level. The predicted reduction here (29% of a 10.3-point gap, or roughly three percentage points) falls well below that threshold of detectability. Demanding a significant result when the underlying theory doesn't predict one is surely an unreasonable test!
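A rough illustration of that detectability point, using a standard two-sample proportions test (this is not the test behind the table; the 86% base rate is illustrative, and only the group sizes are taken from the tables above):

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

n_first, n_second = 643, 138      # group sizes from the bar-passage tables
p_first = 0.86                    # illustrative base passage rate

# How large a second-choice advantage is needed before p drops below .05?
for diff in (0.02, 0.04, 0.06, 0.08):
    successes = np.array([round(p_first * n_first), round((p_first + diff) * n_second)])
    nobs = np.array([n_first, n_second])
    stat, pval = proportions_ztest(successes, nobs)
    print(f"difference of {diff:.0%}: p = {pval:.3f}")
# With samples this size, differences much below six percentage points
# do not reach significance at the five percent level.
```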
This is the general story of the first-choice / second-choice data -- two things hold up under any plausible formulation of the comparison groups and for any of eight or nine measures of outcomes. First, the second-choice group invariably shows results that equal or exceed the predictions of mismatch theory. Second, those results are statistically significant whenever they are predicted, on theoretical grounds, to be so.
Under any reasonable interpretation, then, the first-choice / second-choice data is pretty much slam dunk evidence for the mismatch argument. It therefore speaks volumes that the critics have almost uniformly opted to ignore it.
The only attempt to respond to this data has been offered by Lempert et al (2006), who picked up on a table in my Reply to Critics showing that second-choice students were much more likely to cite "financial aid" as a major factor in choosing their law school (this obviously follows from the fact that financial considerations play a role in most second-choicers passing up more elite schools). Lempert suggests that the second-choicers have better outcomes because they have more aid.
I had considered this question long before publishing my first analysis of the first-choice / second-choice data. It's an easy idea to test, because the BPS data has detailed information on how much aid of different types students receive. One need only run the first-choice / second-choice regressions with variables added for amount of financial aid. The answer: financial aid has tiny beneficial effects, but not enough to meaningfully change any of the substantive comparative results. If anything, it might help (a little) to explain why some of the second-choice outcomes are even better than the mismatch theory predicts. (I will post these regressions shortly.)
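The structure of that check looks roughly like the sketch below (synthetic data and made-up variable names, since the BPS file obviously cannot be reproduced here; the point is only the form of the comparison, adding aid variables to the first-choice / second-choice regression and seeing whether the second-choice coefficient moves):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data; the variable names are hypothetical, not the BPS coding.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "second_choice": rng.integers(0, 2, n),   # 1 = passed over first-choice school
    "index_gap": rng.normal(-110, 40, n),     # credentials relative to tier mean
    "grant_aid": rng.gamma(2.0, 2000, n),     # dollars of grant aid
    "loan_aid": rng.gamma(2.0, 4000, n),      # dollars of loan aid
})
df["first_year_gpa"] = (-0.6 + 0.15 * df["second_choice"]
                        + 0.003 * df["index_gap"] + rng.normal(0, 0.5, n))

# Does the second-choice advantage survive once financial aid is controlled for?
base = smf.ols("first_year_gpa ~ second_choice + index_gap", data=df).fit()
with_aid = smf.ols("first_year_gpa ~ second_choice + index_gap + grant_aid + loan_aid",
                   data=df).fit()
print(base.params["second_choice"], with_aid.params["second_choice"])
```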
So the question remains wide open for the critics: how can your arguments be reconciled with this data?
Pat,
Forgive me if I don't fully understand your argument, especially in the first paragraph from "But the very data you cite" to the end. But let me respond to what I do, or think I do, understand. First, you are correct that the BPS data set does not contain all the information one would want to resolve these issues; hence some arguments must proceed from suppositions, and the issue is what suppositions are plausible enough that they fairly suggest or challenge conclusions. (In this connection, I will simply note that Prof. Sander uses assumption-dependent simulations at a number of places, and often, as with his simulation of the effects on the production of black lawyers of abolishing affirmative action, I and others find his assumptions questionable.)
Second, Sander's mismatch analysis differs from the Ayres and Brooks analysis in several important ways. One way, and one reason why I and my coauthors find the Ayres and Brooks work superior (though we think it too does not avoid the selection bias issue), is that Ayres and Brooks use as a control group first choice blacks accepted at more than one institution; i.e., those who could have attended a second choice institution if they wanted to. Sander's analysis uses all first choice black students as his control group. Thus when you speculate that "first- and second-choice blacks likely received, on average, the same offers of financial aid from their second-choice schools," you are dealing with a population to which your speculation cannot, for the most part, apply, since most first choice students had no second choice school accepting them, much less offering them identical scholarships. Had Sander used the Ayres and Brooks first choice sample of blacks who had a choice (a group that seems intuitively more like the second choice students, and so a better control than the entire first choice sample he chose to use), then it would at least be possible, in theory, that your speculation was accurate. We would also know that the first and second choice samples both consisted of students good enough to be admitted to more than one school.
Finally, I will simply reassert the plausibility of the kind of selection artifact I posit and its implications for the conclusion Sander reaches. Presumably, if it is financial aid that lures a student from his first choice school (as I suspect it is in many, though not all, cases), the second choice school must be giving the student more aid than, and perhaps considerably more aid than, the first choice school offered. Knowing how law schools struggle to get the best minorities, I expect that schools offering strong aid packages look closely not just at index credentials but at other factors that predict law school success (like the courses in which grades were received and letters of recommendation). There is thus considerable reason to believe that the second choice students are stronger on unmeasured variables, at least as much reason, I would argue, as there is to believe that students at more elite schools are stronger on unmeasured variables than those at less elite ones. (Note that in Systemic Analysis, Professor Sander argues that law schools tend to admit minorities almost mechanically on the strength of their index scores. Were this true, selection bias would not be a threat to the analysis once credentials were controlled, and the mismatch hypothesis would fall because it is inconsistent with the BPS data, which show that, holding credentials constant, students who attend less elite schools do not do systematically better in graduating and passing the bar than those who attend more elite schools.)
If you are working on the book with Sander, let me suggest that you use the Ayres and Brooks first choice control sample of students admitted to more than one school who chose their first choice, since this sample seems more comparable to the second choice sample and better fits your speculations. But if you did this, I assume you would conclude, as Ayres and Brooks did, that the data do not support a finding of a mismatch effect.
Posted by: Rick Lempert | 25 September 2006 at 09:51 PM
Professor Lempert,
Your suggestion that the first-choice / second-choice analysis is biased in favor of supporting a mismatch hypothesis seems to rest on an unstated premise: that the black students who chose to attend their first-choice school were not offered the same financial aid as black students who chose to attend their second-choice school (and thus, presumably, have less impressive unobserved credentials). Yet, you offer no evidence to support this premise. Instead, you assume that since second-choice blacks chose to attend their second-choice school for financial aid reasons, first-choice blacks must not have received financial aid from *their* second-choice schools. But, the very data that you cite – that second-choice blacks placed more importance on financial aid than first-choice blacks – offers a more plausible alternative explanation: first- and second-choice blacks likely received, on average, the same offers of financial aid from their second-choice schools, but only the second-choice blacks valued financial aid highly enough to give up the opportunity to attend their first-choice school.
Your argument would be much stronger if first- and second-choice blacks placed equal importance on financial aid. Then, it would be more reasonable to assume that those attending their second-choice institutions received better financial aid packages than those who chose to attend their first-choice institutions, since students who equally value financial aid are likely to respond similarly when it is in fact offered.
(Disclosure: I’m working with Rick Sander on his forthcoming book)
Posted by: Patrick Anderson | 25 September 2006 at 03:45 PM
In acknowledging and purporting to refute the critique by me and my coauthors, Professor Sander misinterprets claims that we made clearly. But first, some background.
The reason Sander saw a need to do his second choice analysis is that he realized his presentation in Systemic Analysis was flawed by a failure to consider selection effects. Recognizing this flaw was the only way Sander could rescue his theory, because, contrary to what his mismatch theory predicted, controlling for initial credentials there is no consistent evidence in the BPS data that a student did better on the bar by going to a lower tier school, and, in particular, there appears to be considerable evidence that, controlling for credentials, students do better by attending elite tier schools than by attending schools in any other tier. Recognizing selection bias salvages the theory, provided the bias is strong, because one can plausibly argue that a student at an elite tier school with a particular admissions index score is in fact a stronger student than an admittee at a less elite school with the same index credentials. The claim is that the more elite school looked beyond the BPS credentials to other factors that predicted the student, if accepted, would succeed; factors that the student who only got into a lower tier school presumably did not possess. Thus superior performance by students in more elite schools is not necessarily a reverse mismatch effect or even a general refutation of the mismatch hypothesis. Rather, it may mean that the apparently unconditional prediction of the mismatch hypothesis (holding credentials constant, black students do better on the bar if they are in schools where their credentials are more like those of their white classmates) cannot be expected to hold in this form, because the black students in higher tier schools are stronger students than those in lower tier schools despite similar measured credentials.
Building on an insight of Ayres and Brooks, Sander thought he could control for selection bias by looking at students who had a choice of what school to attend and chose a school that was not their first choice. These students would be known to have unmeasured characteristics that allowed them to be admitted to a (presumably more elite) first choice school.
There are numerous problems with Sander's analysis (see our reply to his response to critics at the Equal Justice Society and U Michigan SSN websites), but the only one I will mention is the one that Sander gets wrong in his current post. I and my colleagues never said, nor do I think we wrote anything that could fairly be read in context to say, that black second choice students did better because they received more financial aid, nor did we ever suggest that the more financial aid a black student received the better he or she would do. (Hence Sander's regression analysis does not rebut our claim.) Rather, what we said is that Sander's second choice analysis not only is unlikely to eliminate selection bias but is very likely biased in favor of his mismatch hypothesis, through the same kind of mechanism that arguably biased the uncontrolled analysis against this hypothesis.
82.1% of BPS African American law students attending their second choice school said that financial aid was very important to them, compared to 47% of black students attending their first choice schools. At the other extreme, 25.3% of African American students attending their first choice school gave no importance to financial aid, compared to 5.1% of such students attending their second choice schools. So why do black students go to a school that is not their first choice? It is likely that financial aid is a very important reason.
Controlling for index credentials and need, why does a law school give one student and not another financial aid, or give a more generous financial aid package to one student than it offers another? The likely answer is that the school believes the student to whom it offers aid or a better aid package has a greater chance of success (based on variables other than admissions index credentials, since we are holding those constant) than the student to whom it doesn't offer aid or gives a less attractive package. This is likely to be particularly true of students the school very much wants to recruit, like especially able minority students. Thus, just as students at elite schools should be stronger on unmeasured credentials than students with similar indices at less elite schools and so do better on the bar, so should students attracted to second choice schools by financial aid packages sufficient to change attendance preferences be, on average, stronger on unmeasured variables than students at first choice institutions. Selection bias now works to favor the mismatch hypothesis where, in the overall data, it worked against it. Hence selection bias cannot be excluded as the reason for the greater success that Sander sees second choice students as enjoying, controlling for index credentials, just as it cannot be excluded as a reason for the absence of, or even reverse, mismatch effects in the overall data.
I might add that even if Sander's analysis did not have this flaw, it would not support a conclusion of strong mismatch effects, because even Sander's second choice students are greatly mismatched when their credentials are compared to the credentials of whites at the schools they attend. There are, as I have said, additional problems with Sander's analysis. One can find our discussion of them at the places indicated above, or I will be happy to send a copy of our critique.
Posted by: Rick Lempert | 25 September 2006 at 03:04 AM
"But second, how one feels in classes after starting law school probably does play a big role in performance -- if one feels overmatched, I think that's going to lead to less learning."
So... feeling overmatched can "lead to less learning" and does "play a big role."
But... feeling stigmatized "can't explain more than a tiny part of performance"
I see.
Posted by: Corey | 23 September 2006 at 10:36 AM
"and is itself largely a function of aggressive preferences"
There are two possible responses to that: 1) Blame white students who put entirely too much emphasis on objective criteria as a marker for competence and exclude blacks unfairly as a result.
2) Assume that such behavior is logical and blame the preferences as you have done.
I am not convinced by your mere assurance that stigma and hostility in law school have only a minor effect. To put it bluntly, as a white male, how would you know? All you know is that there is a performance gap that matches the credential gap within some correlation. You do not know the actual subjective cause of either, and can't get it off the objective data.
You have no idea how much being told that you are "mismatched" affects performance, and you are nowhere near the first person to assert that. Your innovation is merely toward legitimizing the discourse. Hence my conclusion that you are making your own "discovery" worse.
Confidence on day one of law school isn't the whole story. At my school, many students who later did well were drunk at orientation. Many black students who thought they were going to get a fair environment instead were marginalized into an "identity group" and presumed less competent.
"By attending his second-choice school, he hasn't proven himself to be more successful at anything, dealing with stigma or otherwise."
It is easier to cover at (presumptively less elite) second choice schools. Whether or not every student is adequately competent matters more at the more competitively oriented schools.
Posted by: Corey | 23 September 2006 at 10:18 AM
Jessica,
Thanks for your comment. I think there are three issues here. First, does your confidence going into law school shape how you do? The Bar Passage Study surveyed all participants right at the beginning of law school. Black students were, on average, more confident than white students about their future grades in law school (most thought they would be in the top quarter of their class). I suppose this might be because law schools recruit blacks assiduously; in any case, there's not a significant difference in confidence between first- and second-choice blacks. So "a priori" confidence doesn't have much to do with subsequent performance.
But second, how one feels in classes after starting law school probably does play a big role in performance -- if one feels overmatched, I think that's going to lead to less learning. The second choice students (similar to you) plausibly feel more confident as the first year progresses because they are, in fact, not overmatched -- and they learn more, get better grades, and eventually pass the bar in much higher numbers.
The third issue is whether ostracism at law school hurts black performance. I don't rule this out entirely -- blacks have more difficulty gaining entrance into desirable study groups, for example, and this may affect their performance. But the effect is small -- it can't explain more than a tiny part of performance, as I read the statistics -- and is itself largely a function of aggressive preferences (whites know that blacks have been admitted with preferences, and seek out the "strongest" students for their study groups).
Posted by: Rick Sander | 22 September 2006 at 06:44 PM
Isn't there at least one other difference, though? The student at the second choice school knows she is attending a school that is thought of as less elite and less difficult than her first choice. She probably relaxes somewhat in that environment. Couldn't that help her performance?
This occurs to me because of my own experience. I was a low-income white student with a high GPA and a high LSAT score, and I was admitted to every law school to which I applied. I chose to attend one of the lower-ranked schools that accepted me, the Univ. of Minnesota, mostly because they offered me a full scholarship. In part, though, I probably made that decision because the more elite schools seemed even further from my experience. I couldn't imagine myself being part of the class at Harvard Law School. It seems kind of silly to say that now, but then, I'm a different person than I was back then.
Anyway, though law school did seem like an alien environment, I felt pretty darn confident about my intellectual abilities in that context, having chosen on purpose a lower-ranked school, and I did really well. Maybe it was because there was no "mismatch," or maybe it was because I would have done well anywhere. Or maybe it was at least partly because I felt so confident about my chances. Isn't that at least possible? What am I missing?
Posted by: Jessica | 22 September 2006 at 05:58 PM
Corey, I think you might be missing my point. Had the second-choice black student gone to his first-choice school, he would have been a first-choice black student. By attending his second-choice school, he hasn't proven himself to be more successful at anything, dealing with stigma or otherwise. He has simply elected not to attend a school where the gap between his credentials and the median white student's credentials would be maximized. A first-choice student at his school has not made this choice. That is the only difference between the two.
Posted by: Patrick Anderson | 22 September 2006 at 05:30 PM
"The only thing distinguishing these two types of black student at the same school will be the gap between their respective credentials and the median white student at that school."
No, the black student with the more "matched" credentials is already proven to be more successful at dealing with the kinds of stigma they will also face in law school. That is, they are better at "covering." (See Yoshino)
The black student that was preferenced in might have the exact same relative ability, but be more sensitive to things like stigma and tokenization from professors and classmates, harassment by campus police, etc...
Posted by: Corey | 22 September 2006 at 04:50 PM
"If the “stigma” effect you are suggesting actually existed, we would expect to see black students performing much worse in law school than one would predict based on their incoming credentials."
Not if the credentials themselves are subject to prior hostile environment selection bias. If stigma exists in law school, then it almost certainly exists in undergrad. Black students who do well in spite of stigma in undergrad, and thus achieve credential parity with their white peers, are likely to have similar success in law school. In fact they are probably better students, having overcome greater pressure than comparable white-privileged students.
That theory can be supported by your data: if you can overcome stigma and come in with identical credentials, you can do identically well. Come in having been discouraged by stigma and hostility, and you will continue to be discouraged by stigma and hostility. (I'm sorry that stigma and discouragement are so hard to quantify.)
So you all call that a "mismatch." You normalize input criteria (LSAT, grades) to output criteria that are designed to be correlated with them (timed law school exam scores). Then you argue that it is wrong to change the input standards because that would change the output standards. This looks extremely suspect to someone who is skeptical of the concept of objective merit. Your system is an echo chamber and is blind to variables like stigma that affect both the input and output measurements.
You'll get to see this argument again someday, I needed a new article topic.
Posted by: Corey | 22 September 2006 at 04:35 PM
Corey,
I’m working with Rick Sander on his forthcoming book, and so I thought I would respond to your post. If the “stigma” effect you are suggesting actually existed, we would expect to see black students performing much worse in law school than one would predict based on their incoming credentials. Thus, in a regression analysis examining what factors correlate with law school performance, we would expect to see a large and significant negative coefficient on being black, even when controlling for incoming credentials. But this is just not the case. Controlling for credentials almost entirely eliminates the performance gap between black and white law students.
But I think there is a more obvious problem with your analysis. In attempting to explain why second-choice blacks would perform better than first-choice blacks given this apparently uniform “stigma” effect, you write: “The effect of stigma produced by whites who believe in ‘mismatch’ will be more pronounced at more elite first choice schools because more is at stake in admissions there.” Here’s the problem: “first-choice” is a relative term, and thus it doesn’t make sense to talk about “more elite first-choice schools.” One black student’s first-choice school might be one of the lowest-ranked schools in the law school hierarchy. Another black student’s second-choice school might be a top-20 law school. Thus, for obvious reasons, at any given law school it is likely that there are both “second-choice” and “first-choice” black law students. The only thing distinguishing these two types of black student at the same school will be the gap between their respective credentials and those of the median white student at that school. If it is a stigma effect and not the credentials gap that explains black performance in law school, why would second-choice blacks fare any better than first-choice blacks at the *same* (or same type of) law school?
Posted by: Patrick Anderson | 22 September 2006 at 03:00 PM
"So the question remains wide open for the critics: how can your arguments be reconciled with this data?"
1) Assume that widespread disapproval of "mismatched" blacks at elite schools by white cohorts and empirical legal scholars creates a hostile, isolating, and stigmatizing environment.
2) Assume that a hostile, isolating, and stigmatized environment hinders learning and performance on tests.
If those assumptions are true, then your argument is self-reinforcing. The more you talk about "mismatch," the more black students will feel as if they are unwelcome and the worse they will do. The effect of stigma produced by whites who believe in "mismatch" will be more pronounced at more elite first choice schools because more is at stake in admissions there.
I honestly believe, based on your choice of topic and method, that you do not belong at a school as elite as UCLA. Does that anger you? Does it color your likely response? Reduce your objectivity? I hope, then, that your response isn't tested for objectivity, or you might underperform based on my hostility.
Posted by: Corey | 22 September 2006 at 01:26 PM