Over at the Legal Profession Blog, my good friend Jeff Lipshaw, equipped with an Excel spreadsheet, a map, and a ruler, has taken on the perennial claim that the Lawyer/Judge input variable in U.S. News contains a "coastal bias." His analysis says no, but Jeff has asked for some help.
Using a simple OLS model, my dependent variable is 2006 Lawyer/Judge Reputation. My independent variables include Academic Reputation, 75th percentile LSAT (which is a good proxy for total rank), a west coast dummy (Arizona, California, Oregon, and Washington; 27 schools = 1), and an east coast dummy (Delaware, Maine, Maryland, Massachusetts, New York, New Hampshire, New Jersey, Pennsylvania, Rhode Island, Vermont, and Virginia; 54 schools = 1). Here is a summary of the results, which corroborate Jeff's theory:
Obviously, the signs for both coasts are negative (i.e., no systematic boost for either coast), and the p-value for East Coast implies a statistically significant penalty for being an east coast school. Perhaps schools far away from the major east coast legal markets are destined to seem better than the nearby law schools that firms know well but tend to run down. The adjusted R-squared for the model is 75.7%.
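For readers who want to replicate this kind of specification on their own data, here is a rough sketch in Python's statsmodels. The file name and column names (lawyer_judge, acad_rep, lsat_75, west_coast, east_coast) are placeholders, not my actual dataset.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder dataset: one row per school, with the columns named below.
df = pd.read_csv("usnews_2006.csv")

# OLS of the 2006 Lawyer/Judge reputation score on academic reputation,
# 75th-percentile LSAT, and the two coastal dummies (1 = coastal, 0 = otherwise).
model = smf.ols(
    "lawyer_judge ~ acad_rep + lsat_75 + west_coast + east_coast",
    data=df,
).fit()

print(model.summary())  # coefficients, p-values, adjusted R-squared
```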
We might be able to specify a better regression model, or recode with a better definition of "coastal", or correct for some minor heteroscedasticity. But anyone who thinks they can salvage the bias theory is probably kidding themselves.
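On the heteroscedasticity point, here is a sketch of the kind of check I have in mind, again with placeholder names: a Breusch-Pagan test on the residuals, then the same model re-estimated with heteroscedasticity-consistent standard errors.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.diagnostic import het_breuschpagan

df = pd.read_csv("usnews_2006.csv")  # same placeholder dataset as above
model = smf.ols("lawyer_judge ~ acad_rep + lsat_75 + west_coast + east_coast",
                data=df).fit()

# Breusch-Pagan test: a small p-value flags heteroscedastic residuals.
_, bp_pvalue, _, _ = het_breuschpagan(model.resid, model.model.exog)
print(f"Breusch-Pagan p-value: {bp_pvalue:.3f}")

# Re-estimate with heteroscedasticity-consistent (HC3) standard errors; if the
# coastal coefficients keep their sign and significance, the east coast penalty
# is not an artifact of non-constant error variance.
print(model.get_robustcov_results(cov_type="HC3").summary())
```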
Jeff, you should plan on spending the summer in Ann Arbor at the ICPSR statistics boot camp. You clearly show promise.
Bill, I have to confess to not really understanding your reply.
You write: "The data above rebuts the idea that there is a pro-coast bias." But not if the variables you're using themselves incorporate a coastal bias, right? That's what I was worried about.
You write: "Here is one plausible mechanism: The larger firms have first hand knowledge of (a) national schools, (b) schools in their region. Some of the regional schools on the coasts get very low marks from practitioners. In contrast, scores of similar quality/characteristics in the South or Mid-west are left blank on the U.S. News questionnaire. Since a disproportionate number of practitioners are on the east coast (in my data, 48% of all Am Law 200 lawyers in the U.S. are in the NE/Mid-Atlantic corridor), east coast law schools, after functionally controlling for rank (through LSAT and Acad Rep), fare poorly." If I understand you, this would show only that the less prestigious East Coast schools suffer; it may still be that the better-known East Coast schools get a big boost as against their non-East Coast competitors, just because they're "known quantities." Is that consistent with your data?
You write: "in general, it is more defensible to survey more lawyers in regions where there are, in fact, more lawyers." Doesn't it depend on what it is you think students want to know? Most law schools, even most top law schools, place regionally, therefore the regional reputation by lawyers who actually know something about the schools is probably most important...which would favor, wouldn't it, a geographically diverse pool of respondents?
You write: "Any survey sample that is non-random contains some implicit or explicit assumptions." So does a non-random set of responses--e.g., one that is geographically skewed--even if it was no one's intention.
Posted by: Brian Leiter | 09 April 2007 at 12:06 PM
Very cool. Thanks Bill.
Posted by: Mike Guttentag | 08 April 2007 at 04:56 PM
Mike,
Removing Academic Reputation does not change the coefficients or significance of the coastal variables in any meaningful way. Acad Rep and 75th Percentile LSAT are highly collinear; when Acad Rep is deleted, 75th percentile LSAT is a highly significant predictor (p < .001). And alas, the results are essentially identical; there is no pro-coast bias in the Lawyer/Judge score. If anything, it is anti-coast for the east coast.
I included both Acad Rep and LSAT because, no doubt, years of rankings affect how respondents rank schools. Acad Rep and LSAT effectively control that influence so we can test for a coastal effect while (functionally) holding US News rank constant. bh.
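For anyone who wants to run the same check, here is a rough sketch of what I mean (placeholder file and column names, not my exact code): variance inflation factors to see the collinearity, then the reduced model without Acad Rep.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools import add_constant

df = pd.read_csv("usnews_2006.csv")

# Variance inflation factors for the regressors; large VIFs for acad_rep and
# lsat_75 would confirm the collinearity between the two rank proxies.
X = add_constant(df[["acad_rep", "lsat_75", "west_coast", "east_coast"]])
for i, name in enumerate(X.columns):
    print(name, variance_inflation_factor(X.values, i))

# Re-run without academic reputation; the question is whether the sign and
# significance of the coastal dummies change when LSAT alone carries the rank control.
reduced = smf.ols("lawyer_judge ~ lsat_75 + west_coast + east_coast", data=df).fit()
print(reduced.summary())
```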
Posted by: William Henderson | 08 April 2007 at 02:37 PM
Bill. Is the negative coefficient for “coastness” in explaining lawyer/judge rankings driven by the fact that you have included academic reputation as an independent variable? It seems to me that your regression only shows that the anti-coastal bias is greater for the lawyer/judge rankings than for academic reputation, not that there is an anti-coast bias for the lawyer/judge ranking. I’m guessing that if you remove the academic reputation as an independent variable the lawyer/judge rankings will show a pro-coast bias. Is this correct?
Posted by: Mike Guttentag | 08 April 2007 at 01:17 PM
Brian, this is a good question.
The data above rebuts the idea that there is a pro-coast bias. But none of us has given much thought to a possible negative coastal bias. If it exists, what is the causal mechanism? A separate question is whether the geographic concentration is "biased" in a way that makes the Lawyer/Judge score less useful to prospective students.
I agree that the practitioner component of U.S. News is geographically skewed. U.S. News relies upon responses from hiring partners at larger firms, which are disproportionately located on the coasts. The judges are more likely to be geographically dispersed (it is my understanding that only federal district court judges are surveyed).
So how would this play out in the results? Here is one plausible mechanism: The larger firms have firsthand knowledge of (a) national schools and (b) schools in their region. Some of the regional schools on the coasts get very low marks from practitioners. In contrast, scores for schools of similar quality and characteristics in the South or Midwest are left blank on the U.S. News questionnaire. Since a disproportionate number of practitioners are on the east coast (in my data, 48% of all Am Law 200 lawyers in the U.S. are in the NE/Mid-Atlantic corridor), east coast law schools, after functionally controlling for rank (through LSAT and Acad Rep), fare poorly.
Another way of saying the same thing: outside the coasts, judges probably have more influence on the U.S. News Lawyer/Judge reputation score.
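To make the mechanism concrete, here is a toy simulation. Every number is invented: two regional schools of identical underlying quality, nearby practitioners scoring their own regional school a bit lower because they know it well, everyone else leaving the out-of-region school blank, and a practitioner pool skewed toward the east coast.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two regional schools of identical underlying quality, one east coast, one midwest.
TRUE_QUALITY = 3.0
n_east_respondents = 480     # ~48% of a hypothetical 1,000-person practitioner pool
n_midwest_respondents = 120

# Nearby practitioners "know the school well but tend to run it down,"
# modeled here as a small negative offset on the score they give it.
east_scores = rng.normal(TRUE_QUALITY - 0.4, 0.5, n_east_respondents)
midwest_scores = rng.normal(TRUE_QUALITY, 0.5, n_midwest_respondents)

# Everyone else leaves the out-of-region school blank, so each school's published
# score is simply the mean of the responses it actually receives.
print(f"East-coast regional school: {east_scores.mean():.2f}")
print(f"Midwest regional school:    {midwest_scores.mean():.2f}")
```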
A separate question is whether the U.S. News Lawyer/Judge survey methodology is biased in its sample of respondents. It would probably be a good idea to have separate "hiring partner" and "judge" surveys. But in general, it is more defensible to survey more lawyers in regions where there are, in fact, more lawyers. Moreover, the firms represented in the U.S. News survey (a) actually hire a large proportion of law school graduates, and (b) pay higher salaries. Many prospective students would be interested in the views of these lawyers.
Any survey sample that is non-random contains some implicit or explicit assumptions. Perhaps a cleaner place to start is to articulate and defend a sampling strategy for practitioners. A good argument can be made that a random sample from Martindale-Hubbell, for example, would (a) not be geographically biased, but (b) not be very useful to students applying to law school.
Posted by: William Henderson | 08 April 2007 at 09:45 AM
How does this work as an analysis if your variables themselves include coastal biases? (This is a real question, I'm curious.)
We know that, because of the criteria for receiving the survey (which are related to office size), the surveys are sent disproportionately to lawyers on the two coasts, as against, say, lawyers in Oklahoma or Kansas. So for that reason alone, the results are skewed a bit (how much, we can't say) toward lawyers on the coasts and, more generally, lawyers in "big firm" centers (so including, e.g., Chicago and Houston).
Posted by: Brian | 08 April 2007 at 07:35 AM
Michael,
The VAP issue might be a bit different, for a couple of reasons. First, and most relevant to your point, many schools have a policy against hiring their own VAPs and Legal Writing Instructors. This is not exactly a grass-is-greener point, though. It is more of a precommitment strategy to avoid a negative connotation for the VAP and to prevent friend bias from affecting the hiring decision. Second, there is a selection bias for VAPs. Many of them took VAP positions because they couldn't get a tenure-track position. Furthermore, during their VAP years, some VAPs prove they can't write or don't teach well or don't have great ideas. Thus, VAPs as a whole may not be as strong a group as the general entry-level pool.
Posted by: anon | 07 April 2007 at 03:51 PM
If this analysis holds, the implications are interesting. If the survey respondents are primarily on the coasts, it may imply that regional schools, which those respondents know mainly through their top students (the ones who land national jobs), end up ranked better than coastal schools, whose mass of graduates stay local to work and are well known to the respondents.
In other words, the unknown ranks higher than the known.
This is not an unheard-of phenomenon for schools -- my sister is a podiatrist, and she found that she (and all of her classmates) fared better in the residency match at hospitals where they hadn't actually done an internship. For example, she wanted the VA in Denver (where she had worked), but got the VA in Palo Alto (where she hadn't). Her friend wanted the VA in Palo Alto (where she had worked), but got the VA in Denver (where she had not).
I wonder if the same is true of AALS hiring: how many VAPs get offers where they VAP?
Posted by: Michael Risch | 07 April 2007 at 09:11 AM
Steve, I coded NJ as east coast but neglected to list it above (I have corrected the list now). Northern NJ is also part of metro NYC, so my comment above covers Seton Hall and Rutgers-Newark. Good catch. Thanks, bh.
Posted by: William Henderson | 06 April 2007 at 09:08 PM
And what of New Jersey (which does not appear in your list of "east coast" schools)? Two of the three NJ schools might also qualify as NYC area schools (although the New Yorkers would probably contest the point).
Posted by: Stephen J. Lubben | 06 April 2007 at 08:07 PM
Anon,
Great question. When I swap out the East Coast dummy and use New York metro law schools instead, the coefficient is once again negative and the p-value is .018. In fact, all the major coastal markets have a negative sign. Only NYC is negative at p < .05. bh.
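For the curious, the recode is just a new dummy in place of East Coast. Here is a sketch that assumes a hypothetical "metro" column in the placeholder dataset, which is not necessarily how I coded it.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("usnews_2006.csv")

# Hypothetical recode: 1 if the school's metro area is New York, 0 otherwise.
df["nyc_metro"] = (df["metro"] == "New York").astype(int)

# Same specification as before, with the NYC-metro dummy in place of east_coast.
nyc_model = smf.ols("lawyer_judge ~ acad_rep + lsat_75 + west_coast + nyc_metro",
                    data=df).fit()
print(nyc_model.summary())
```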
Posted by: William Henderson | 06 April 2007 at 05:55 PM
Isn't the typical bias claim centered around NYC specifically rather than the entire coasts? In other words, Fordham does better than, say, Indiana-Bloomington on lawyer/judge rep (despite tying on academic rep) because more of the lawyer respondents are in NYC. Perhaps you could isolate that in the next go-around.
Posted by: anon | 06 April 2007 at 05:41 PM
Nice work, Bill (and kudos to Jeff for getting the discussion moving).
Posted by: Michael Heise | 06 April 2007 at 05:28 PM