Earlier this week, Howard Gillman wrote a terrific post reminding us that within the broader academy, "empirical" denotes a lot more than just quantitative analysis. Howard's comments harked back to earlier posts by law professor bloggers who asked what traction they could get from qualitative work. (For example, Lior Strahilevitz on "Big Tent" Empiricism; Dan Filler on Qualitative Empirical Legal Research; and Victor Fleischer on Case Studies.)
In turn, I ran across this short essay by economist Susan Helper, which is directly on point: "Economists and Field Research: 'You Can Learn a Lot Just by Watching,'" [JSTOR] 90 Am. Econ. Rev. 228 (2000). [The title quotes the inimitable Yogi Berra.] Helper's first few paragraphs put things into perspective:
Modern economics began with Adam Smith's visit to a pin factory, which helped him explain how the division of labor worked. However, not many economists today do much fieldwork, which involves interviews with economic actors and visits to the places where they live and work. ...
Economists today typically do their research using econometrics and mathematical modeling. These techniques have many strengths but share the weakness of distance from individual economic actors. In contrast, field research allows direct contact with them, yielding several advantages.
1. Researchers Can Ask People Directly About Their Objectives and Constraints. ... [examples]
2. Fieldwork Allows Exploration of Areas with Little Preexisting Data or Theory. ... [examples]
3. Fieldwork Facilitates Use of the Right Data [by discovering otherwise "unobserved" environmental differences]. ... [examples]
4. Fieldwork Provides Vivid Images That Promote Intuition. ... [examples]
The rest of the essay argues that standard critiques of qualitative research (i.e., that it is not objective, not amenable to replication, and not generalizable) can be addressed through better methods. [Disclosure: as an undergraduate, I worked as an RA for Helper doing, among other things, field research in the automotive industry.]
A few years ago, Sue Helper told me about a conversation she had with Ronald Coase. In a nutshell, Coase claimed that the neglected takeaway from his famous 1960 article, "The Problem of Social Cost," was that economists should be doing qualitative research on companies and institutions in order to better understand transaction costs, since transaction costs exist in almost all contexts and often produce less-than-optimal outcomes. That anecdote has stuck with me for a long time.
This weekend I am doing my own field research for my Law Firms class by attending the Indiana State Bar Association Solo & Small Firm Conference. (Note, if the IRB people ask, I am getting CLE credit; I am also an ISBA program organizer.)
Perhaps the moral of all of this is something like the following. Empirical rigor does not attach to any particular method or approach. Rather, empirical rigor lies either in applying an accepted methodology correctly or in developing novel approaches in ways that clearly add to the body of knowledge. The fundamental question of all empirical research is whether the method helps us better understand some problem in the world. If it does, that is all the rigor we should demand.
It is highly probable that virtually all methods that social scientists use have the potential to make at least some contribution to most problems in the world. We can have serious debates over the merits of particular works, over where the most important work is being done at present, and over whether particular methods are particularly good at helping us understand particular problems in the world. Still, it is highly doubtful that any analysis that is oblivious to, or does not engage with, the findings of those using other approaches can truly be said to be rigorous. While none of us can be experts on everything, we owe it to ourselves, our audiences, and our students to be at least moderately intelligent consumers of work done from a variety of perspectives. Indeed, I think the academic world would be much better off if we identified ourselves with the particular problems in the world we are studying rather than with our preferred method for doing that studying.
Posted by: Mark A. Graber | 03 June 2006 at 08:52 PM
Well, I don't really disagree. And I may be talking apples to your oranges. While I have some familiarity with polisci qualitative research, I am more familiar with that done by legal academics. And I think a great deal of that work suffers from the subjectivity problem I talk about.
Perhaps my concerns do not map onto the qualitative/quantitative dimension so much as onto the training of the researchers.
Posted by: frank cross | 03 June 2006 at 02:58 PM
Chris is exactly right. Some questions are "better answered using a [non-quantitative] approach." On the other hand, if the question lends itself to reliable, quantitative coding, then it is certainly better to do that.
Posted by: Howard Gillman | 03 June 2006 at 01:53 PM
Quantitative research is not inferior to non-quantitative research, and vice versa. Darwin was at least as good a scientist as, oh, Poole and Rosenthal. And if we agree, then let's all get out of the habit of thinking that quantitative methods are objective while (e.g.) case analysis is subjective (which was the original point of Mark's that I was defending). Counting doesn't make something true; interpreting (the inevitable core of ALL research) doesn't make it subjective. Methods are tools. They can be used well or poorly. And the choice of tools should be driven by the questions, not vice versa.
Posted by: Howard Gillman | 03 June 2006 at 01:45 PM
I'll attempt to split this baby. It's important at the outset to distinguish between what are commonly called "qualitative" methods (and which, per HG, are probably better called "non-quantitative") and "interpretive" methods. Most interpretive approaches are qualitative, but not all qualitative methods are interpretive; the discussion here seems to center around non-interpretive, non-quantitative empirical approaches.
Done properly, statistics are nothing more than a set of regularized, mathematically-consistent tools for drawing inferences from (mostly, quantitative) data. In that sense, they are a particularly compact/efficient way of learning from facts/data. If the (quantitative) data are accurate in the sense discussed above -- that is, if they reliably and validly capture the important aspect(s) of reality in which we are interested -- then the internal consistency and compactness of statistics has a lot of potential value.
In this light, there's an important difference between data collected to assess whether Jackson's vote in Brown was consistent with his legal views, and data where all votes in favor of regulators are coded as "liberal." Either of these enterprises can be done well or badly, and so the conclusions drawn from those data might be right or wrong, as an empirical matter. But the latter data are clearly more likely to be reliable and valid, all else equal; there are regulators, and there are votes for and against them, and we can code them accordingly. Conversely, the former requires knowing something (Jackson's legal views) that -- at least circa 2006 -- we are unlikely to be able to know as easily and with the same certainty with which we can say "the EPA (or whatever) is a regulator." Moreover, by the nature of the question there are fewer such data: Jackson was one person, with one set of views, and he voted in Brown only once.
Any attempt to collect, code, and analyze quantitative data on the Jackson/Brown question, then, faces hurdles that would be tremendously difficult to overcome. In short, the question is simply one that is better answered using a qualitative approach; done well, I'd be much more inclined to believe a qualitative analysis on the point than one that attempted to use statistics. On the other hand, if the question (and the data to address it) are amenable to reliable, valid quantitative coding (as in whether a judge's vote was pro- or anti-regulator -- I'll leave aside the contention-fraught issue of whether such a vote is "liberal" or "conservative"), then the advantages that come with a more quantitative approach are very substantial (and, indeed, superior to qualitative ones, or at worst as good).
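To make the "reliable, valid quantitative coding" idea concrete, here is a minimal sketch of what such a transparent, a priori coding rule might look like; the function, case names, and labels are all hypothetical:

```python
# Hypothetical coding rule: any vote in favor of the regulator is coded
# "pro-regulator." Because the rule is stated up front, anyone can re-apply
# it to the same cases and check the resulting data.
def code_vote(voted_for_regulator: bool) -> str:
    return "pro-regulator" if voted_for_regulator else "anti-regulator"

# Hypothetical mini-docket of (case name, did the judge side with the agency?)
votes = [("Smith v. EPA", True), ("Jones v. SEC", False), ("Doe v. FTC", True)]
coded = {case: code_vote(pro) for case, pro in votes}
print(coded)  # {'Smith v. EPA': 'pro-regulator', ...}
```

Because the rule is explicit, a second coder applying it to the same votes must produce the same data set -- which is what makes the reliability claim externally checkable.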
Posted by: Chris Zorn | 03 June 2006 at 01:40 PM
"On coding rules: coding rules are essential to large N data collection, but the presence or absence of coding rules has nothing to do with the accuracy of the classifications or the inferences drawn from those classifications."
I think I'm being unclear. Having an a priori coding rule may have something to do with accuracy insofar as it prevents subjectivity in classification. Moreover, having a transparent coding rule enables an external check on accuracy.
"On case selection: gerrymandering the data to promote preferred theories is a potential flaw of all research."
Well, not if the research uses random selection of cases or, again, transparently discloses its rules for case selection for external checking.
I'm not arguing that qualitative research has nothing to add or is inferior, just that it is complementary to quantitative research. Do you believe that quantitative research is not inferior and is complementary to qualitative?
Posted by: frank cross | 03 June 2006 at 01:31 PM
I agree that non-quantitative researchers should be as clear (transparent) as they/we can be in explaining the basis for their/our judgments.
On reproducibility: note that reproducibility might be essential to evaluating the accuracy of the data collection (e.g., whether other researchers also find 100 widgets when they look in the same place), but it has nothing to do with the accuracy of the inference (or theoretical significance) one associates with the data; that's a matter of interpretive persuasion. Saying "we coded all pro-agency decisions as liberal" allows others to reproduce the numbers but it doesn't make the result otherwise accurate (or "reliability does not imply validity").
On coding rules: coding rules are essential to large N data collection, but the presence or absence of coding rules has nothing to do with the accuracy of the classifications or the inferences drawn from those classifications.
On case selection: gerrymandering the data to promote preferred theories is a potential flaw of all research.
I think Frank's points about coding rules, reproducibility, and case selection have more to do with issues of generalizability than with interpretive accuracy in a given instance. To refer back to my first post: I think Klarman gives us enough information (and enough guidance to look at the information ourselves) to evaluate whether he is correct about whether the Brown justices were motivated by law or politics, but obviously those data alone tell us nothing about other cases.
Then again, a potential flaw with large N studies is that inferences we might support in cases 1-224 (about, for example, whether an opinion that contains the word X should be coded as pro-government) may seem less accurate in cases 225-267 (because our historical/interpretive sensibilities lead us to recognize differences we consider relevant). Large N counters often insist that the fact that classifications may be challenged in individual cases doesn't really matter, because the data will still uncover general trends; and sometimes we are persuaded, and sometimes we are not.
Posted by: Howard Gillman | 03 June 2006 at 01:20 PM
OK, one other thought. I was musing about how qualitative research might benefit from the theory behind quantitative research. I just read a statutory interpretation study of securities law cases by Grundfest that reached certain findings in univariate models but found they were all wrong in a multivariate model. He concludes that the casual observer would be misled by "just watching" if he failed to consider the interaction of all the variables. Thinking about control variables would be important in qualitative research too. I've seen some very good qualitative analyses that do this, but so many that do not.
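To see the point concretely, here is a minimal simulation (invented data and variable names, not Grundfest's) of how a univariate estimate can point the wrong way until a confounding variable is controlled for:

```python
# Omitted-variable bias in miniature: the univariate estimate has the wrong
# sign until the confounder is controlled for. All names are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
ideology = rng.normal(size=n)                 # confounder, unobserved by "just watching"
pro_agency = ideology + rng.normal(size=n)    # correlated with the confounder
# True model: pro_agency lowers the outcome, ideology raises it
outcome = -1.0 * pro_agency + 3.0 * ideology + rng.normal(size=n)

# Univariate model: omits ideology and gets a misleadingly positive estimate
uni = sm.OLS(outcome, sm.add_constant(pro_agency)).fit()
print(uni.params[1])    # roughly +0.5 -- wrong sign

# Multivariate model: controlling for ideology recovers the true effect
X = sm.add_constant(np.column_stack([pro_agency, ideology]))
multi = sm.OLS(outcome, X).fit()
print(multi.params[1])  # roughly -1.0
```

Just "watching" the bivariate relationship here gives the wrong sign; only the model that accounts for both variables recovers it -- the qualitative analogue of asking what else might explain what you are seeing.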
Posted by: frank cross | 03 June 2006 at 12:57 PM
Well, I don't disagree with that as far as it goes. However, quantitative research is (or at least should be) more transparent about how it makes these classifications, so that the methods can be replicated, and qualitative research often is not. To get graduate students to do coding, you need systematic rules, but researchers often think they don't need those rules themselves when making classifications. I think reproducibility goes hand in hand with accuracy.
Second, and most important, is case selection. A potential flaw with qualitative research is "looking out over the crowd and selecting friends": choosing cases that support the thesis and ignoring those that don't. Really good qualitative research won't do this, but it's hard for the reader to know whether it's really good qualitative research. Now, this isn't inherent. Why doesn't qualitative research take a random sample of cases to analyze?
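For instance, here is a minimal sketch of a transparent, reproducible random draw; the case list, seed, and sample size are hypothetical:

```python
# Draw a random, reproducible sample of cases for close qualitative reading,
# rather than hand-picking the cases that fit the thesis.
import random

docket = [f"case_{i:03d}" for i in range(1, 268)]  # hypothetical list of 267 cases

random.seed(2006)                     # fixed seed: others can reproduce the draw
sample = random.sample(docket, k=25)  # the 25 cases to analyze in depth
print(sorted(sample))
```

A fixed, disclosed seed makes the selection itself reproducible, so a skeptical reader can verify that the cases were not chosen to flatter the thesis.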
Posted by: frank cross | 03 June 2006 at 12:51 PM
Frank's comment "but it is in fact subjective" about Mark's throwaway line ("Objectivity is when you have a second-year grad student code opinions as liberal or conservative. Making the decision for yourself on the basis of intensive textual analysis is subjective and, hence, not really scientific.") strikes me as too dismissive, and inconsistent with the last sentence of his post.
Quantitative and qualitative scholars routinely make claims about how best to characterize legal texts (or judicial votes, etc.). If either a quantitative or non-quantitative scholar (hate the word "qualitative") simply says, "hey, it's just my opinion that this is how the text should be characterized; I'm not going to try to defend it with arguments/evidence" then that position is "subjective." However, if (as always happens in empirical work) either offers arguments/evidence then the characterization can be evaluated by others and it's no longer subjective. This applies EQUALLY whether one is saying something like "Jackson's vote in Brown was inconsistent with his legal views" or something like "we code all votes in favor of regulators as 'liberal'". In BOTH cases the strength/reliability of the characterization is a function of how persuasive the inferential/interpretive justification is -- it has nothing to do with whether there is a subsequent use of counting methods.
Frank says, "if it is possible to design an accurate quantitative test, that provides a vastly stronger basis for reaching a conclusion." But in most cases, the thing that gives us confidence that a "quantitative test" is "accurate" is our evaluation of how well the investigator justifies the inferences from the proposed data. But, you see, this is precisely the same standard for evaluating the "accuracy" of a non-quantitative characterization. And so there is nothing inherently more "objective" about quantitative tests. More to the point: in both cases, the work of "accuracy" is primarily produced by the persuasiveness of one's interpretation rather than the reproducibility of one's counting.
Frank puts it well: "qualitative understanding is essential to devising an accurate quantitative test." Essential -- right.
Posted by: Howard Gillman | 03 June 2006 at 12:01 PM
I see the two as perfectly complementary. I really don't think you can develop helpful theories about the law without doing the "watching" and qualitative analysis. I have seen too many economists try to impose models on the law that are extremely ill-fitting.
On the other hand, I feel very insecure about conclusions based on just qualitative claims. Mark Graber's article has a throwaway line dismissing those who criticize researcher case analysis as subjective. But it is in fact subjective. If it is possible to design an accurate quantitative test, that provides a vastly stronger basis for reaching a conclusion. But of course the qualitative understanding is essential to devising an accurate quantitative test.
Posted by: frank cross | 03 June 2006 at 11:11 AM