
22 September 2006

Comments

Rick Lempert (P.S.)

I just want to add a brief P.S. to my last post, which may have teaching value because it so nicely illustrates the danger of making or believing what might seem to be plausible assumptions. Rick Sander suggested that those respondents who did not respond to the question asking for the number of their bar memberships should be treated as if they had not passed the bar. I eyeballed the data array and reported that only 11 of the 35 people who returned questionnaires but did not respond to the number of bar memberships question were leaving this question blank while answering the ones that preceded it. Of these, 5 were minorities. Out of curiosity I looked more closely at these five to see if it was plausible to suppose that at least they had not passed the bar. What I found was that these five respondents reported they had practiced law for 4, 11, 13, 18 and 23 years. Their reported household incomes ranged from $106,000 to $520,000, with a median income of $177,000. Two reported current jobs in government, one in private practice, one as a corporate counsel, and one reported no current employment. Sander's effort to deflate our respondents' reported bar passage rate began by counting these people (and those who skipped large numbers of questions) as people who failed the bar. This exercise is a good example of why making assumptions about what non-response means is a dangerous practice.

Rick

Rick Lempert

Two days ago I wrote a reply to Rick Sander's reply to my reply to his comment, which was replying to work I and coauthors have done, some of which replies to work he has done. [Yes, a :-) is in order.] At about 12:30 a.m., as I wrote my last sentence, most of what I had written went poof and disappeared into cyberspace. A small bit may have been posted, though I don't find it, so I will try again:

I hope readers (if there are any left) are not finding this exchange as tedious as I am, but lest silence be taken as consent I will comment on Rick Sander's responses. First, though, note the aspects of my comments he does not respond to; I don't think he has answers to these. Now to his points.

Sander faults me, and implicitly my coauthors, for not being concerned with first-time bar pass rates in our study of Michigan's minority group members in practice. (We designed the questionnaire together, with me having by far the least input, but I seem to be Sander's whipping boy. David, Terry, forgive me for spreading the blame, but instead read this as giving you the credit you deserve for this project.) Our primary concern was not with bar passage but with how UM's grads fared in practice, and questionnaire space is limited; we had to cut questions we would have liked to ask. Moreover, we had no better way of measuring first-time pass rates than we had of measuring ultimate pass rates; we would have had to ask respondents. Even had someone thought of asking about first-time bar passage rates, and maybe David or Terry did, it would probably have ended up cut because there were many more salient issues to explore.

As for not checking objective evidence on bar pass rates [“If Lempert really never made any attempt to determine objectively whether Michigan’s bar passage data could be reconciled with his survey findings, that’s quite a commentary on his work.”], I might quote Sander’s blog [“Determining what has happened to black bar passage rates is much harder; the states generally don’t even keep bar passage data by race, much less disclose it”] on the likely futility of such an effort. Michigan’s graduates take bars in many states; these states for the most part don’t report results by race or school; our data covered a 27-year period; and old data are harder to acquire than recent data. Moreover, there was no special reason to expect our respondents to be untruthful in reporting bar results; bar passage was not the focal point of our study; first-time bar passage rates had no direct relevance to questions we were interested in; and life is short. What would Sander have had us do?

Sander continues to try to justify combining tier 1 and tier 2 schools in estimating minority bar pass rates, but the justification doesn’t hold. Although, as I explain below, it is irrelevant whether Michigan is in tier 1 or tier 2 for deciding what the appropriate focus for a base rate estimate is, the odds are good that Michigan is in the BPS 1st tier. Sander and I both remembered wrong; there are 18 schools in tier 1. (See Wightman’s 1997 NYU L. Rev. article, p. 24.) Moreover, class sizes are not what Sander reports, as his reporting is apparently based on BPS participant numbers. Rather, according to Wightman, the average class size for tier 1 schools is 235 and the average class size for tier 2 schools is 489. Michigan’s class size, which Sander sees as a dispositive datum, is about 360, squarely in the middle. There are 2 public schools in tier 1. The most likely candidates are Virginia, Michigan and Berkeley. But the average tuition in this tier, one of the variables on which the schools are clustered, is above $13,000. Michigan as a state school has long had relatively high tuition, and the blended rate, considering that about 2/3 of the attendees are out of state, was almost certainly far higher than Berkeley’s, a school of about the same size and selectivity as Michigan. So even if Virginia was a better candidate for tier 1 than Michigan because of its size, Michigan is probably the other public law school in this tier.

But as I have said, in an argument Sander does not respond to, this is irrelevant. Sander is trying to estimate how Michigan’s black students will do on the bar. Will they have pass rates closer to what one finds across the tier 1 schools or closer to the average of tier 1 and tier 2 pass rates? The quality of Michigan’s students puts them squarely in tier 1 and quite a bit above the mean for tier 2, so their expected bar pass rates should be like those of the tier 1 schools wherever Michigan is located. Sander’s failure to reply to this point about student quality makes his effort to justify his combined tier 1 and tier 2 reference group hardly worth replying to. In brief, he tries to argue, based on fragmentary data, that Michigan’s overall bar pass rate is like that for tier 1 and tier 2 schools combined. Problems abound. Sander’s data are for first-time bar pass rates; it is ultimate bar passage rates that we report, and as Wightman indicates, many students pass the bar on second or third tries. Moreover, if Sander is going to use this datum to indicate that Michigan does not belong with the other tier 1 schools in terms of likely minority pass rates, he should present us with some information on the first-time bar pass rates of other tier 1 schools. Last I looked at California data, Michigan has some years where it does better in terms of pass rates than comparable quality schools and some years where it does worse. Depending on the year, Sander's extrapolation from California and a few select other bar passage rates might indicate that no school belongs just in tier 1.

There is, however, an easier way to show that Sander’s estimate of Michigan’s minority pass rates is way off. [“the estimate that 77% of Michigan blacks ultimately pass the bar seems very generous”] According to Wightman’s numbers in her NYU article (for reasons of time, I haven’t double-checked these with my own analysis of the BPS data set), about 75% of those blacks who took the bar exam passed. (Table 8) The far lower overall rate of black bar passage given entrance to law school reflects the fact that many black students never graduate. However, at Michigan we know from school records that almost all black students who enter law school graduate. So what Sander would have readers believe is that the final bar passage rate of Michigan law school graduates is statistically no different from the collective bar passage rate of black students who graduate at all the nation’s law schools, even though the entering credentials of Michigan’s black students probably placed them in the top 3 to 7% of black students matriculating at law school in the BPS cohort (a few points lower perhaps if we limit the comparison to black students who graduate from the nation’s other law schools, but still almost certainly in the top 10%). Sander’s estimate of identical final bar pass rates defies credulity.

My original comment on this matter was in part to illustrate my claim that Sander systematically biases tests in the direction of his favored hypothesis. I think the point stands, but let me make my critique of Sander in its weakest form for any who still aren’t sure how to sort out different views of the proper reference group: at a minimum, Sander should have rerun his estimates using the tier 1 figures alone as the reference standard and reported these results because this would have put the hypothesis he wanted to prove to a harder test.

Sander’s original blog commentary provides another example of assuming facts to favor a hypothesis, which is instructive not just as a criticism of Sander’s work but because it nicely illustrates the danger one invites by asserting assumptions as if they were facts. One step in Sander’s attempt to whittle down our finding that 96.3% (not 97%) of Michigan minority (not black) alumni respondents passed the bar is: ["However, another fifteen blacks skipped this question. If we assume these respondents also did not pass the bar, the proportion falls to 322/(322+12+15) = 92.2%"] Sander, in other words, assumes we know exactly why some respondents did not answer the question about the number of bars they were members of and would have us fill in the answer for them. Those who didn't answer, Sander assumes, didn’t answer because they had never passed the bar. This may strike some readers as reasonable; after all, why should people not list their bar memberships if they are members of one or more? But there are lots of reasons: they may be tired of answering questions, they may be leaving many questions blank, they may be uncertain of how many bars they still belong to or of how past bar memberships should be counted, they may have been a member of a bar and let it lapse after moving to another state and be puzzled how to respond; one cannot know. What one does know is that, although a large number of non-responses may be worth noting, a researcher should not be filling in answers respondents leave blank on the basis of subjective guesses as to what was meant, especially if the subjective guess favors one’s hypothesis. (There are statistical techniques of imputation which do fill in responses, and it often makes sense to use them, but these judgments are probabilistic and based on statistical models, not on a researcher inserting a desired response.)
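
One way to see the force of this point is to bound the possibilities rather than guess. A minimal sketch, using the counts quoted in Sander's passage above (322 reporting passage, 12 reporting non-passage, 15 skipping the question); the three treatments of the skipped answers bracket where the true rate could lie:

```python
# Sensitivity bounds on a reported pass rate under different treatments of
# item non-response. Counts are those quoted in the exchange for black
# respondents: 322 reported passing, 12 reported not passing, 15 skipped
# the bar-membership question.
passed, failed, skipped = 322, 12, 15

# Exclude skippers (treat the non-response as uninformative):
excluded = passed / (passed + failed)

# Worst case: assume every skipper failed (Sander's assumption):
worst = passed / (passed + failed + skipped)

# Best case: assume every skipper passed:
best = (passed + skipped) / (passed + failed + skipped)

print(f"excluded: {excluded:.1%}  worst: {worst:.1%}  best: {best:.1%}")
# excluded: 96.4%  worst: 92.3%  best: 96.6%
```

The spread between the bounds is driven entirely by the assumption chosen, not by anything in the data themselves.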

Now consider what the data show. There were 18 more questionnaires returned by minorities than there were answers to the question about number of bar memberships, and 17 more questionnaires returned by whites. I looked at the raw data array, scrolling through a computer screen which had the bar membership variable in the rightmost column and room for ten other variables in columns to the left. I saw only 11 cases where there was a non-response code for bar membership and positive responses to the variables to the left. In other words, it appears that 24 of the 35 people who returned questionnaires and failed to answer the bar membership question had either returned blank questionnaires or were leaving large numbers of questions blank. There is no reason to suppose that anyone in this group was skipping the bar membership question because he or she was not a bar member. Of the 11 people who selectively skipped the bar membership question, 6 were white, 4 black and 1 another minority. Since our focus in the bar passage table was on whether there were differences in the rates at which minority and white Michigan graduates passed the bar, even if all non-respondents were refusing to give an answer of zero bar memberships because they were ashamed of this or thought a blank was the same as a zero, the statistical implications would be nil and the effects on overall pass rates tiny. (I might add that in our discussion of response bias - see also below - we note that respondents may be very slightly stronger than non-respondents. Hence it may be that a slightly lower proportion of Michigan’s students, both black and white, passed the bar than the number of respondents reporting bar passage indicates, but even if bar pass rates are reduced by a percent or two, the closeness of minority and white bar passage rates should not be much affected.)
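
A short follow-on under the same hedges as the sketch above: treating only the selective skippers as failures (the worst-case recoding this paragraph contemplates) moves the reported rate by only about a percentage point. Folding the one other-minority skipper into the black-respondent denominator is a rough shortcut for illustration:

```python
# Effect on the pass rate of recoding only the selective skippers as
# failures. Black-respondent counts are those quoted earlier in the thread;
# the paragraph above reports 4 black and 1 other-minority selective
# skippers among the 11 total.
passed, answered = 322, 334   # reported passers / respondents who answered
for k, label in [(0, "as reported"),
                 (4, "plus 4 black skippers"),
                 (5, "plus all 5 minority skippers")]:
    print(f"{label:28s}: {passed / (answered + k):.1%}")
# as reported: 96.4%; +4 skippers: 95.3%; +5 skippers: 95.0%
```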

Sander also tries to justify his suggestion that the PDS minority bar pass numbers are vastly overinflated by indicating that over 60% of Michigan’s black students have ended up in the bottom 10% of the Michigan class and extrapolating from there using the inappropriately combined figures from tier 1 and tier 2 schools. Putting aside the fact that Sander should have looked at all affirmative action minorities and not just blacks if he were questioning the results we reported (it wouldn’t make much difference in the overall picture, however), his estimate of the proportion of Michigan’s black students graduating in the bottom 10% is about right for the full sample, although the proportion is noticeably lower in the more recent years of our time series. However, what Sander ignores, and another reason his extrapolation is unjustified, is that Michigan is so selective that most students at Michigan, including most in the bottom 10% of the class, perform at a reasonably high level and have no trouble passing their law school classes. Thus Michigan’s 10th percentile cutoff is closer to a B grade than to a C. Suppose affirmative action were abolished at Michigan. There would still be a bottom 10% in the class. Is there any reason to think that these students would have an especially difficult time passing the bar? Looking at not the bottom 10% but the bottom 5% of graduates of elite schools, we still find that 59% of matriculants graduate and pass the bar, and Michigan’s selectivity probably places it in the top half of the schools in this tier in terms of black student quality. (Higher, ironically, if Michigan and some other elite schools really are in the 2nd tier.)

Because of Sander’s response, I went back and looked at some PDS data. Here is one interesting statistic. If one looks at the 10th percentile of white and Asian students alone, almost 70% of Michigan’s black students fail to exceed that level, even though most are in no danger of failing and will often have a number of B-level grades. If one looks at the index ranking of entering students (in our study a combination of how students ranked on LSAT scores and UGPA), 85% of Michigan’s black students have index scores below the 10th percentile of the index ranking of Michigan’s combined white and Asian students. Thus Michigan’s black students seem, as a group, to be performing considerably better vis-a-vis their white and Asian competitors than their index credential rankings would lead one to expect. This is not what one would expect if the performance of black students at Michigan were being depressed by mismatch effects.

On other matters: Sander does not justify his uncalled-for rhetorical question about data sharing, which seems designed to cast aspersions personally on me, nor does he acknowledge that the issue of sharing the data was not for me to decide. The data would, of course, have been available to the plaintiffs in Grutter under a protective order for their analysis had they wanted it. I don’t recall if this was among the Michigan data sets they requested and were given. Moreover, in his reply the personal aspersions do not stop. Thus Sander writes in his response that I was not sharing data [“during the Grutter litigation, when he was making poorly-supported statements about what the data showed.”] It is possible that I did make a misstatement or two in several hours of testimony - one sometimes mis-remembers or misinterprets information. But I didn’t make any false claims on this point. The testimony he quotes in his blog: [“Our study finds that Michigan, just not to put too fine a point on it, Michigan graduates pass the bar. It doesn't matter, really, whether you're a minority or whether you're white. In one decade, in the 1980's, I'm not going to bother to look at the table, but I think there might have been a statistically significant difference favoring whites, but it was substantively sort of completely trivial. It was like 95 percent of minorities and 98 or something or 99 percent of whites…”] is entirely accurate. (See River Runs Through Law School at p. 422.)

On response bias: anyone seriously interested in whether this is a real danger should read what we did to check for bias in the original article. Rather than deal with the full range of our efforts and offer a global assessment, Sander tries to find one check which, taken by itself, might suggest a problem. The best he can do is: ["Lempert reports that using sources like legal directories, they were able to locate “70% of our minority nonrespondents” and all but 134 of the respondents. Since there were over 1200 respondents, this means they could find 89% of the respondents through directories but only 70% of the minority non-respondents. That should have been a tip-off as well."] The location effort Sander is referring to was an effort to locate current work settings independent of whether sample members returned their questionnaires. One of our sources was workplace addresses in our alumni office database. It is not surprising that alumni loyal enough to keep the law school informed of their business addresses should return questionnaires at a higher rate than those who had done less to keep contact with the law school, and these alumni were easy “finds.” No searching of legal directories was necessary. If we look at whites who did not respond and whose business addresses we sought to locate from the various sources we had available, we find that we could locate 73%, virtually identical to the black non-respondent employment location rate, with the 3% difference being exactly the same as it was among those returning questionnaires. Thus our ability to identify the employment of black and white non-respondents was about the same, and the reason we located more employment data for respondents is almost certainly that we had better alumni records on them.

More importantly, we engaged in this exercise not to determine the jobs held by respondents; we knew those from their questionnaires. Rather, it was an effort to assess whether people whose current employment could not be determined from non-questionnaire sources were unlikely to have practiced law. The non-locatable respondents were, as one might imagine, substantially more likely than locatable respondents to have current careers outside the practice of law. However, only 15% had not practiced law at some time in their careers. If the same rate holds for non-respondents, and if we assume that everyone who had not practiced law had not passed a bar exam, then the estimated bar failure rate of our minority non-respondents would be 4.5% (.15 x 30%), about the same rate of failure to take and/or pass the bar that we found among our minority respondents. (The estimate of non-passers may be a bit low if some of the 70% whose employment we located from law school rather than from bar or law firm records were not practicing law, but it may also be misleading on the high side if some people, like some law professors I know, never took the bar but clearly would have passed it had they taken it.)
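
The 4.5% figure follows directly from the two quantities given in the paragraph. A back-of-envelope sketch; the assumption that located non-respondents count as passers is implicit in the original calculation:

```python
# Reconstruction of the 4.5% estimate: 30% of minority non-respondents
# could not be located; assume 15% of those never practiced law (the rate
# observed among non-locatable respondents) and that never practicing
# implies never passing a bar. Located non-respondents are implicitly
# treated as having passed.
share_not_located = 0.30
never_practiced = 0.15
est_failure_rate = share_not_located * never_practiced
print(f"estimated non-respondent bar failure rate: {est_failure_rate:.1%}")
# -> 4.5%
```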

There is one final point to make about sample bias. Most relevant to current policy is more recent experience. Michigan’s minority graduates during the decade of the ‘90s (1990-96) look more like their contemporary white graduates than the minority graduates of prior years do. Yet response rates for whites and minorities are closest among graduates of this decade, and the difference between white and minority response rates, which is less than 5%, is not statistically significant. In short, Sander’s assertion that there is a substantial difference in the likely bar performance of our responding and non-responding minority graduates is just that, an assertion, and one that is inconsistent with what the data show. (Slight differences, as we acknowledge, may exist; the average minority non-respondent has an LSGPA that is .077 lower than the average respondent on a scale where a C average is 2.0, an A is 4.0 and pluses add .5 to a grade. The average white non-respondent has an LSGPA that is .122 lower than the average white respondent. Much of what we do in our article is to compare white and black performance; these comparisons are unlikely to be affected by even this small response bias, given that it is shared by white and black non-respondents and is somewhat greater for whites, thus offsetting the somewhat higher white response rate.)

In response to my claim that Sander’s seeming mismatch effect depends on his use of first time bar passage rates, Sander writes in part, “Completely untrue. Rothstein & Yoon report evidence consistent with a mismatch using ultimate bar passage as the outcome variable.” Let’s let Rothstein & Yoon speak for themselves:

["Our comparison between more- and less-selective schools offers no indication of negative effects of selective schools on education outcomes. Although school selectivity lowers class rank for both black and white students, this mechanical effect does not translate into later outcomes: Students attending more selective schools are more likely graduate from law school, are equally likely to pass the bar exam—a requirement to work as a lawyer—and earn higher post-graduation salaries.

Results of our between-race comparison are more mixed. Black students attend dramatically more selective schools than whites with similar entering credentials. They are also significantly less likely to graduate or pass the bar exam, but obtain better jobs. Black graduation and bar exam underperformance is concentrated at the bottom of the credentials distribution; among students in the upper four quintiles blacks perform as well or better than whites on every outcome except class rank.
Although three quarters of black students in our sample fall in the bottom quintile, the black-white comparison is potentially biased in this range by endogenous selection into law school....
We interpret our results as demonstrating that there are no mismatch effects on the graduation and bar passage rates of the most qualified black students: Any mismatch is restricted to the lowest-scoring students, few of whom attend the most selective law schools. For these students, the data are consistent either with mismatch or with differential sample selection, so do not support strong conclusions. There is no indication of mismatch effects on any black employment outcomes, though the presence of affirmative action in the job market means that these results are not directly informative about mismatch effects on academic achievement."]

Thus by one test, Rothstein and Yoon find no evidence of a mismatch effect, and by another they find ambiguous evidence that does not support a strong conclusion. In addition, they find no evidence of mismatch in the range from which almost all minorities who enter elite law schools are selected. (In case the reader has forgotten by now - quite understandable - this exchange with Sander is about whether black students in elite schools are harmed by mismatch.) Moreover, even if a mismatch effect exists within a certain range of the data, it is an effect identifiable in the aggregate and would not affect the bulk of black students admitted through affirmative action.

Finally, Sander finds “an odd comment” my claim that he relies largely on significance tests for determining what matters and generally ignores the magnitude of effects. Readers interested in resolving this disagreement should review the argument structure in Systemic Analysis, where a data set of more than 20,000 cases means that it is easy to get significant relationships even where variable effects are tiny. They should also note, at pages 482-83, Sander’s confused if not misleading explanation of what a t statistic above 2 signifies.
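
To illustrate the large-n point: a simulation with an invented one-percentage-point difference in rates (not anyone's actual data) shows that with 20,000 cases per group a substantively trivial effect is routinely "statistically significant":

```python
# Simulated illustration: a one-percentage-point difference in pass rates
# (0.91 vs 0.90, invented for illustration) is tiny in practical terms,
# yet with n = 20,000 per group it usually comes out highly significant.
import random
from math import sqrt, erf

random.seed(0)
n = 20_000
a = sum(random.random() < 0.91 for _ in range(n))  # group 1 passers
b = sum(random.random() < 0.90 for _ in range(n))  # group 2 passers

p1, p2 = a / n, b / n
pooled = (a + b) / (2 * n)
se = sqrt(pooled * (1 - pooled) * (2 / n))         # two-proportion z-test
z = (p1 - p2) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
print(f"difference = {p1 - p2:.3f}, z = {z:.2f}, p = {p_value:.4f}")
```

The expected z here is about 3.4, comfortably past any conventional threshold, even though a one-point difference in pass rates would matter little for policy.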

I have come to find these exchanges with Sander both time-consuming and frustrating. If Sander responds to this, I may not be able to resist replying if I think he has gotten something terribly wrong, but I shall try to refrain, since most of the arguments we make are elsewhere in print or on the web, and if I am getting tired of this, I can imagine how readers - if any are left - must feel. So if Sander wants it, my instinct is to let him have the last word, but readers should not read an inability to reply into silence, and if anyone would especially like to know how I would respond to a particular argument, he/she should write to me and I will reply. I am not unwilling to admit error and will do so if I think I have made one.

I am also frustrated by the exchange because it seems not to have gone anywhere. There are a few matters on which data or theory are so clear that I would have thought it easy to establish consensus, just as I and my coauthors have never argued that the BPS data mistakenly underreport the performance and outcomes of black students, absolutely and relative to whites. So in conclusion I would like to pose a few questions to Sander and ask him to indicate whether he agrees or disagrees.

1) Does the second choice analysis in Reply to Critics implicitly repudiate the methods of analysis used and important assumptions made in Systemic Analysis, even if it reaches the same conclusion? (I believe it does.) Whatever we think of the argument in Reply to Critics, can we conclude that the case for a mismatch analysis cannot rest on the results presented in Systemic Analysis? (I think we can reach this conclusion.)

2) Should the claim in Systemic Analysis that without affirmative action the nation would produce more black attorneys than it does with affirmative action be discarded? (I believe it should be.)

3) If the argument in the first portion of Systemic Analysis that law schools admit black students almost entirely on the basis of LSAT scores and UGPAs were true, wouldn’t selection bias pose minimal threats to a model that included these variables, making the second choice analysis unnecessary? (I believe it would, apart from a small amount of potential bias due to student self-selection.) Assuming agreement, and given that we must choose between the existence of selection bias in law school admissions and the positing of an almost completely mechanical admissions system, should we prefer the selection bias or the mechanical admissions hypothesis? (I would opt for selection bias.)

4) If we just look at the raw BPS data on bar passage by school tier and index credentials, don’t we find (a) that there is no consistent pattern favoring the mismatch hypothesis, and (b) that results for elite schools point away from the direction the mismatch hypothesis predicts (i.e., controlling for index credentials, black students attending elite law schools appear more likely to pass the bar than similarly credentialed students at lower tier schools)?

5) Isn’t it true that the second choice students in Reply to Critics appear substantially stronger than the first choice students on variables we can measure? Is there any dispute about the fact that 60.3% of the first choice black students are in the lowest admissions index decile compared to 36.5% of the second choice students, and that 28.8% of second choice students are in the 4th admissions decile or higher compared to 14.1% of the first choice students? If these first and second choice students differ to this degree on what we can measure, isn’t it likely that the second choice students are also stronger on variables we cannot measure? (I say it is.) If second choice students are stronger than first choice students on unmeasured variables, doesn’t this introduce selection bias into the second choice analysis in the direction of the mismatch hypothesis? (I think it does.)

I assume Rick Sander and I and my colleagues David Chambers, Terry Adams, Tim Clydesdale and Bill Kidder are, or have been, involved in the same larger enterprise. We all wish to understand how black students perform in American law schools, both absolutely and relative to white students, and we wish to understand reasons for the level of black performance and how it may be improved. We all also seem concerned with maximizing the number of competent black lawyers our nation’s law schools produce. Thus I have posed these direct questions. If answered directly, they can sketch out areas of agreement where we need not delve further and sharpen areas where our views of what the data show conflict. I would be happy to answer similar brush-clearing questions that Sander might pose.

Rick Lempert

Rick Sander

Rick Lempert’s comment, above, is noteworthy for how little it responds to my post. I’m pointing out that Lempert’s repeated claim that 97% of Michigan’s black graduates become lawyers can’t possibly be right. It is contradicted by his own dataset (respondents who did not indicate they passed a bar exam should have been treated as not passing the exam). It is also wildly inconsistent with the available data (from state bars) on how Michigan graduates do on the bar exam. A more plausible number is 77%. What does Lempert have to say about the issue at hand?

1) “It never occurred to me to ask for first time bar passage rates.” Amazing, if true. Ultimate bar passage – the thing Lempert says he’s interested in – is hard to measure directly, but it is a predictable function of first-time bar passage. Most of us who teach at law schools, let alone those who specialize in studying legal education, have long been aware of tremendous racial disparities in bar passage rates, in our own school and elsewhere. If Lempert really never made any attempt to determine objectively whether Michigan’s bar passage data could be reconciled with his survey findings, that’s quite a commentary on his work.

2) “Michigan is widely regarded as one of the nation’s elite schools”, therefore, it’s inappropriate to use data for the top two tiers of the BPS data (rather than just the top tier) as a comparison base. I agree that Michigan is a top ten school. But the BPS tiers aren’t based only on eliteness (failing to recognize this has compromised several Lempert analyses); they are also based on size and cost. For example, the sixteen (not fourteen, as Lempert says) Tier 1 schools had an average of 150 participants in the BPS study (suggesting schools like Yale, Northwestern and Stanford). The fourteen Tier 2 schools had an average of about 330 participants (Michigan has about 360 students per class).

This is, in any case, not the central point. Whatever Lempert thinks of Michigan’s tier in terms of prestige, I combined Tier 1 and Tier 2 because I’m interested in Michigan’s position in terms of its graduates’ outcomes. Those schools as a whole have a first-time bar passage rate of 91.2%. Based on the available data from the states of Michigan and California (covering somewhat less than half of Michigan’s grads), Univ. of Michigan’s first-time bar passage rate is about 88%. If we could include other states with many Michigan grads (Illinois, New York), that rate might inch up some, but it seems very unlikely that the overall UMLS rate is any higher than 91.2%. That means the BPS Tier 1&2 data is a good scaling device to see how Michigan’s overall first-time bar passage rate translates into ultimate bar passage rates for everyone and for each racial group. Based on this scaling and other observable facts (like the huge black-white grade gap at Michigan), the estimate that 77% of Michigan blacks ultimately pass the bar seems very generous.
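
The post does not spell out the scaling arithmetic. One plausible reading is sketched below; the two first-time rates are the ones quoted in this paragraph, while the group-level ultimate rate is a placeholder invented purely for illustration, not a BPS statistic:

```python
# A guess at the scaling mechanics: assume Michigan's rates sit at a fixed
# ratio to the BPS Tier 1&2 rates, estimate that ratio from overall
# first-time passage, and apply it to a group-level ultimate rate.
bps_first_time = 0.912    # BPS Tier 1&2 overall first-time rate (from post)
um_first_time = 0.88      # Michigan overall first-time estimate (from post)
scale = um_first_time / bps_first_time           # ~0.965

bps_black_ultimate = 0.80  # PLACEHOLDER for illustration, not a BPS figure
um_black_ultimate = scale * bps_black_ultimate
print(f"scale = {scale:.3f}; projected ultimate = {um_black_ultimate:.1%}")
```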

So, to summarize: (1) Lempert’s 97% figure is inconsistent with his own data; (2) it’s even more inconsistent with the available bar data; (3) a responsible researcher would have been suspicious of the 97% figure and would have checked things out more carefully.

Lempert makes several other comments not particularly relevant to the subject at hand:

--“What Sander does not say is that he has the [PDS] data.” Of course I do; that’s how I know about Lempert’s non-inclusion of people who skipped the bar question in his survey. But, as Lempert concedes, he was unwilling to share the data with me (or presumably any other researcher) during the Grutter litigation, when he was making poorly-supported statements about what the data showed.

--“We were acutely sensitive to [response rate bias]…please look at [our] article and see what we did to check these possibilities.” Yes, please do. I would especially direct readers to p. 405 of The River Runs Through Law School and the accompanying footnote 12. Lempert reports that using sources like legal directories, they were able to locate “70% of our minority nonrespondents” and all but 134 of the respondents. Since there were over 1200 respondents, this means they could find 89% of the respondents through directories but only 70% of the minority non-respondents. That should have been a tip-off as well.

--“Sander’s claims [about mismatch effects] are only supported when he uses first-time bar passage.” Completely untrue. Rothstein & Yoon report evidence consistent with a mismatch using ultimate bar passage as the outcome variable, and my fourth post on this website discusses the first-time vs. ultimate-bar passage data in detail (and note that I report a significant result for ultimate bar passage differences in the first- vs. second-choice data). The mismatch effect has been demonstrated in dozens of ways with many different outcome variables.

--“One general problem I have [with Sander] is that he relies largely on significance tests for determining what matters, and generally ignores the magnitude of effects.” An odd comment. Again, my fourth post discusses the relationship between the magnitude of mismatch results and their statistical significance in detail. And in my 2005 Reply to Critics, I reported both the significance and the magnitude of all my first/second-choice results. Lempert, in his article in the same issue (top of p. 1888) discusses these same results only in terms of significance results. Note, too, that I report all of my results – both significant and non-significant. Lempert selectively reported only those he thought were non-significant, though even here his analyses were essentially wrong.

Lempert needs to stop making unsupportable accusations and start providing some explanations of his own empirical claims.

Rick Lempert

Rick Sander has chosen to use his ELS platform to advance his claims about an affirmative action mismatch effect and to refute critiques by me and others. In this day's comments he also seeks to question work that David Chambers, Terry Adams and I did. Not only do I think that his criticisms fail to stand up to scrutiny, but I also think their flaws typify those one sees in his other affirmative action research.

Rick Sander criticizes the data that David Chambers, Terry Adams and I present in our study of Michigan Law School's grads, suggesting it is seriously biased in favor of affirmative action. The study he criticizes appears in Law & Social Inquiry's Spring 2000 issue. Anyone prone to accept Sander's critique about sample bias in this study should read our article's treatment of the topic. We were acutely sensitive to the possibility that differences in white and black response rates might distort our results, and we checked in many ways to ensure that sample bias was not a serious threat to our analysis. We compared the performance of respondents and non-respondents while in law school, followed the careers of non-respondents in Martindale-Hubbell and other sources, and in other ways took pains to ensure that non-response and the differential response rates of minorities and whites did not threaten our conclusions. We concluded that while there was a slight suggestion in the data that non-respondents were less successful than respondents, the tendency was slight and unlikely to be a threat to conclusions one might reach from the data. If Sander's critique makes sense to you, please look at the article and see what we did to check these possibilities.

Sander is correct that our concern was with whether UM graduates passed the bar eventually rather than the first time they took it; indeed, it never occurred to me to ask for first-time bar passage rates. Why does this matter? My hunch is that it would not have mattered for Sander either, except that when he did his second choice analysis - which, as Bill Kidder, Tim Clydesdale, David Chambers and I have shown, has a number of serious flaws (see the Equal Justice Society's web site for this critique or the UM Law SSRN site) - his claims were supported only when he used first-time bar passage rates as his dependent variable. [Since Sander could have claimed success for his mismatch theory had first-time or ultimate (or both) bar passage rates supported his hypothesis, the results for the mismatch hypothesis test (based on first-time bar passage) should not be accorded the significance levels he claims, because his hypothesis had more than one chance to be proven correct.]
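
A quick simulation of the bracketed point; the dependence structure below is invented (real first-time and ultimate outcomes would be correlated in ways only the data could determine), but it shows how giving one hypothesis two correlated chances to clear the 5% bar inflates the true false-positive rate:

```python
# Under the null, each test alone rejects 5% of the time; letting the
# hypothesis "win" if EITHER first-time OR ultimate passage comes out
# significant raises the family-wise error rate above 5%. The two p-values
# share a common draw half the time to mimic correlated outcomes; the
# coupling scheme is illustrative only.
import random
random.seed(1)

trials, hits = 100_000, 0
for _ in range(trials):
    common = random.random()
    p1 = common if random.random() < 0.5 else random.random()
    p2 = common if random.random() < 0.5 else random.random()
    if p1 < 0.05 or p2 < 0.05:   # success on either test counts
        hits += 1
print(f"family-wise error rate: {hits / trials:.3f}")  # ~0.086, not 0.05
```

With fully independent tests the rate would be 1 - 0.95**2 = 9.75%; correlation pulls it part of the way back toward 5%, but never all the way.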

The significance level issue is, however, a relatively minor flaw compared to others in Sander's analysis. The use of first-time bar passage rates as the primary dependent variable and the lack of attention to eventual bar passage rates is one of several more important ones. In Systemic Analysis the conclusion to which Sander builds is that affirmative action actually depresses the number of blacks who enter the bar. If this is what he has attempted to prove (he suggests this was a side issue in his response to us, but the prose in Systemic Analysis suggests otherwise, and it was the most attention-getting claim he made), first-time bar passage rates have no relevance whatsoever. One enters the bar when he/she passes it, whether on the first try or the tenth.

Sander's justification for using first-time bar passage rates and largely ignoring eventual bar passage rates is, to my mind, unconvincing and seems a post hoc rationale. He disparages eventual bar passage rates as a valid measure because:
["The mismatch effect occurs during law school. If a student passes the bar on his fourth attempt, two years after law school, he may well have partially offset the mismatch effect by hiring tutors, taking a variety of bar-preparation courses, and other work aimed at learning what he didn’t learn in law school. First-time bar attempts are the best measure of what was actually learned in law school."]

Here are some problems I have with this argument. First, accepting Sander's speculations, one can ask: if mismatch effects are so easily avoided with post-law-school efforts, why should we be concerned about them? But I don't accept Sander's speculation. As I recall, Wightman's 1997 NYU L. Rev. article finds that blacks who fail the bar the first time are considerably less likely to retake the bar than whites; indeed, Wightman suggests that a non-trivial proportion of the difference between black and white eventual bar passage rates is due to the fact that, having failed the bar, blacks are considerably less likely than whites to persist and take the bar again. Also, we know that blacks tend to be financially needier than whites; they are less likely to be able to afford continued tutoring, more bar exam courses and the like. Indeed, a possible and perhaps likely contributor to the difference between white and black first-time bar passage rates is that lower wealth means that blacks are less likely than whites to invest in high quality bar review courses the first time they take the bar, because the potential money saved, if the bar can be passed without an expensive course, means more to the average black than to the average white.

Sander's comments here also illustrate problems I have with some of his other analyses. One general problem I have is that he relies largely on significance tests for determining what matters, and generally ignores the magnitude of effects. There is an analogous issue here. Are first-time bar passage rates a better indicator of what was learned in law school than eventual bar passage rates? The claim is plausible, but surely the magnitude of such a difference is not so great as to obliterate any strong mismatch effect when eventual bar passage is the measure. So even if Sander were correct to prefer his measure as the more valid indicator of law school learning, the lack of difference with eventual bar passage rates would be important, because it suggests that whatever effects exist are not so strong that we should be basing important policy decisions on them. (I should add that although the claim that first-time rates are a somewhat more valid indicator of law school learning than eventual bar passage rates is plausible, a number of plausible rival hypotheses remain. First-time bar passage might, for example, be disproportionately affected by unfamiliarity with the multi-state test format, which might mean that eventual bar passage rates - which include results after test format familiarity has grown - are the more valid measure.)

Even more telling, and something the casual reader would not appreciate, is that Sander's choices of variables and methods (like the choice of first time bar passage rates) seldom seem to put his favored hypotheses to the most severe test, and sometimes introduce biases that call his claims into question.

Consider his attempt to show that the estimate of the bar passage rate of Michigan's minority students which Chambers, Adams and I present in our LSI article is wildly off. The flaws in this effort typify the flaws I see in a number of his analyses.

Michigan is widely regarded as one of the nation's most elite schools. Even if it was for some reason included in the BPS second tier, which seems most unlikely, the schools most like Michigan from the point of view of the quality and likely bar success of Michigan's black students would be those in the BPS top tier. For some reason, which is not clear in the data, after controlling for admissions credentials, blacks at the 2nd BPS tier schools seem to do more poorly relative to black students in most other tiers than one would expect. Thus by combining the two top BPS tiers, rather than examining black student performance only in the tier that not only seems most likely to contain Michigan but also, and more importantly, contains black students with credentials most like those of the black students who attend Michigan, Sander strongly and unfairly biases his analyses in favor of the conclusions he wants to reach.

Sander's suggestion that Michigan might be in the second BPS tier, and thus his decision to combine schools in tiers 1 and 2, would, however, strike most readers not familiar with the BPS data as a reasonable decision. But in fact it is not reasonable to combine tier 1 and 2 schools as a Michigan benchmark. If we just look at BPS blacks in tier 1 schools, we find that close to 89% of them passed the bar. Add to this the following facts that Sander ignores: (1) There are, if I recall correctly, 14 schools in the BPS top tier, and Michigan's U.S. News rating was seldom if ever lower than 7 during the study period where these ratings overlapped; thus Michigan was a higher status school than at least half the schools in the elite tier, a factor likely to be reflected in the quality of the school's black students. (2) Michigan's graduates seem more likely than the graduates of many other elite schools to take the Michigan and other Midwestern bars, bars that are not reputed to be among the nation's hardest to pass. (3) The bar passage figures we present include Native American and Hispanic test takers as well as blacks, and students from the former ethnic groups may have higher bar passage rates than blacks. Thus the figures we report in our study are far more plausible than the 77% figure that Sander derives from his flawed, assumption-based simulation. Since Sander knows the bar passage rate among black elite tier law school graduates is about 89% (we present the data in our critique of his Stanford piece), his offering of the 77% figure as an estimate of Michigan's black bar passage rate is not just questionable in itself, but should caution readers against accepting many of his other seemingly confident claims.

Sander's personal critique of me and my testimony is barely worth dignifying with a reply. Suffice it to say the testimony he quotes me as giving accurately reports what our study found and accurately indicates that Michigan's minority graduates are overwhelmingly likely to pass the bar. Sander also asks a question that seems to carry the implication that I and my coauthors had an obligation to release the PDS data which we did not meet. What Sander does not say is that he has the data. Nor does he say that when he first asked for these data (in connection with his invitation to respond to our study in LSI), the law school was in the midst of the Grutter litigation and, as we told him, for this reason we could not release any data. Several years later he renewed his request for the data. This raised issues of how to protect the confidentiality of our respondents, since unlike the BPS data, all our respondents are from a single school, which means that reidentification of respondents would be easy, especially if respondents are part of small minorities, such as black students who graduated in 1985 and are partners in law firms of 50 or more lawyers. Hence our ethical obligation at this point was not to release the data. After Sander's request for the PDS was received, Terry Adams invested considerable time and effort in putting the data in a form which, coupled with certain commitments to confidentiality, allowed us to give Sander the data he requested. One who reads what Sander has written would think we had failed in an obligation and, in particular, would think that I had failed in an obligation, when I never was the custodian of the data and never had, and still don't have, the data set in a form to release it.

I want to make one final comment on Sander's critique of our LSI study. In our research on Michigan graduates we chose the variables to analyze and the kinds of analyses to do based on a priori theoretical considerations, with little exploratory data analysis. We chose variables because we thought they would be of interest, not because we expected them to make Michigan's minorities look good. We presented what we found and never chose a variable or a way of analyzing it because it made affirmative action look good, nor did we suppress a result because it made affirmative action look bad. We did, after completing our initial analyses, do extensive sensitivity checking, looking especially at gender and race interaction effects, and where we thought they qualified our analyses we indicated this along with our original results. We also challenged some of our results by doing reanalyses with different specifications, but did not report these when they supported our original findings. Limiting exploratory analysis was a core methodological principle, because we knew our topic was controversial and that our work would not only have policy relevance but might also be questioned in litigation. We also knew that if her lawyers wanted it, the plaintiff would have access to our data. Whether or not the plaintiff chose to reanalyze our data, we wanted to be able to testify under oath that we had done nothing intended to skew our findings so as to favor affirmative action, and we knew of no step we had taken unintentionally that might do this. I commend this approach to others working on sensitive issues, even if they do not think they might have to testify to their findings. I believe it is how good social science is done. In reading Professor Sander's work, one should ask whether he has taken this approach.

Corey

Oooooh, I see, the bar is OK because it is correlated to grades. And grades are OK because they are correlated to the LSAT (loosely).

As long as you can keep pointing your stats back to a prior "merit" criterion, you get to avoid answering the challenge I make. Which is: the whole system is shot through with bias, hostility, and stigma, which starts practically from birth, and which unsurprisingly causes effects that show up on every test. We can't correlate the results to a sample unharmed by stigma because one does not exist.

I am not denying that there is a performance gap on the LSAT, grades, and bar passage. That would be silly. What I am saying is that if I blind you, then you are going to be bad at the LSAT, at law school exams, and at the bar. I could accommodate your blindness and make the test in braille, but there are still two problems: 1) the accommodation is a poor substitute for being fully sighted, and 2) your blindness is still my fault.

So maybe preferences are not a perfect accommodation/substitution for the effects of lifetime stigma and hostility, but that stigma is still partially our fault.

Rick Sander

Corey,
As I've noted earlier in the week, the disparities in bar passage rates are not caused by race. There is a wealth of studies (e.g., those of Stephen Klein) showing that controlling for grades in law school washes out the race effects. We see a disparity in racial outcomes largely because we place minorities (blacks in particular) in settings where they are highly likely to get very low grades.

Corey

"Blacks at elite schools clearly have significant trouble with the bar."

We should figure out what is wrong with the bar exam then. It is scandalous that after students pay over $100K to the state for an education and graduate from Michigan Law, the state bar applies a biased test with such an obvious discriminatory impact.

