
April 09, 2006

Comments

Sean Wilson

... when my Chicago paper is done, I will send it to you. Please look at it; I attempt to address these issues there. I do not think the argument against career numbers is very sound. The paper is very objective; I think you and Jeff would agree with the analysis. I feel that the data are ruling my conclusions more than anything else. I also think I have had a bit of a breakthrough in my thinking about this subject.

Sara Benesh

I understand that career scores are predictive, but what caused those votes in the first place? Using them to predict votes would, of course, result in circularity. When it's necessary to use ideology to help predict votes, what do we use?
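(A toy simulation may make the circularity worry concrete; this is only an illustrative sketch with made-up numbers, not anything from the thread, and the Segal-Cover-style "ex ante" measure here is a hypothetical stand-in.)

    # Toy sketch of the circularity problem: a "career score" computed from a
    # justice's own votes will fit those same votes by construction.
    import numpy as np

    rng = np.random.default_rng(0)
    n_justices, n_votes = 9, 200

    # Latent ideal points, plus a noisy ex ante proxy (Segal-Cover-like).
    ideology = rng.normal(0.0, 1.0, n_justices)
    ex_ante = ideology + rng.normal(0.0, 0.7, n_justices)

    # Each vote is "liberal" (1) with probability driven by latent ideology.
    p_liberal = 1.0 / (1.0 + np.exp(-ideology))
    votes = rng.binomial(1, p_liberal, size=(n_votes, n_justices))

    # The career score is just each justice's observed liberal-vote rate.
    career = votes.mean(axis=0)
    rate = votes.mean(axis=0)

    print("ex ante vs. vote rate:", round(np.corrcoef(ex_ante, rate)[0, 1], 3))
    print("career  vs. vote rate:", round(np.corrcoef(career, rate)[0, 1], 3))  # 1.0 by construction

The ex ante proxy tracks the vote rate only as well as its measurement noise allows, while the career score matches it perfectly because it is computed from the very votes being "predicted"; that is the circularity objection in miniature.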

Sean Wilson

Sara:

First, I think the issues of how to measure ideology and law are best addressed by clarifying what one means when those terms are deployed in sentences. Second, I have a methodological paper that I am delivering in Chicago that quite clearly demonstrates the superiority of career ratings to Segal-Cover scores in every area of decision making imaginable. If by "ideology" one means propensity for direction, the career ratings are the best spatial placement of propensity for direction across two-dimensional space. Moreover, they prevent the ecological bivariate ideological model from losing two-thirds of its explanatory value once it is modeled correctly; instead, the fit of the model is only cut by half. In short, many of these issues improve with clarity about what you are talking about (propensity for direction) and with properly accounting for its relative influence (good, but not necessarily governing).

Tracy Lightcap

A quick comment (great way to avoid grading mid-terms).

I'm not sure the gap between legal scholars and quantitative poli sci types can be bridged on these issues. The law profs Sara mentions are right; from their perspective, classifying either judges or case outcomes as liberal or conservative does "lack nuance". But that's because studying the law is a largely rhetorical exercise involving the parsing of decisions in line with discourse constraints. The research done in political science is concerned with patterns of gross empirical effects. The two lines of inquiry spend a lot of time talking past each other as a result. I'm not sure that any measurement strategy can overcome these differences; indeed, what seems to usually happen instead is that interested scholars switch sides, exactly what you would expect when paradigmatic differences are involved.

What might be a better idea is to change the analytical methods used. An adaptation of the "analytical narrative" approach might work to bring the two sides together; at least, there is ample scope there for combining the "nuance" of legal analysis with quantitative work. Once we get both sides focused on a similar approach, some of the dissatisfactions might disappear.

Btw, since there is an excellent chance that someone has already done something like this and I've missed it, I'd be interested in seeing any citations. Ok, I could do the research myself, but I've got all those mid-terms ...

Sara Benesh

Darren Schreiber (UCSD) sent an informative article on this type of "path dependency" in research from the sciences. It's worth a look. Here's the abstract:

Andrey Rzhetsky, Ivan Iossifov, Ji Meng Loh, and Kevin P. White. 2006. "Microparadigms: Chains of collective reasoning in publications about molecular interactions." PNAS 103(13): 4940-4945. Online at http://dx.doi.org/10.1073/pnas.0600591103.

Abstract: We analyzed a very large set of molecular interactions that had been derived automatically from biological texts. We found that published statements, regardless of their verity, tend to interfere with interpretation of the subsequent experiments and, therefore, can act as scientific "microparadigms," similar to dominant scientific theories [Kuhn, T. S. (1996) The Structure of Scientific Revolutions (Univ. of Chicago Press, Chicago)]. Using statistical tools, we measured the strength of the influence of a single published statement on subsequent interpretations. We call these measured values the momentums of the published statements and treat separately the majority and minority of conflicting statements about the same molecular event. Our results indicate that, when building biological models based on published experimental data, we may have to treat the data as highly dependent, ordered sequences of statements (i.e., chains of collective reasoning) rather than unordered and independent experimental observations. Furthermore, our computations indicate that our data set can be interpreted in two very different ways (two "alternative universes"): one is an "optimists' universe" with a very low incidence of false results (<5%), and the other is a "pessimists' universe" with an extraordinarily high incidence of false results (>90%). Our computations deem highly unlikely any milder intermediate explanation between these two extremes.

Eileen Braman

First, Sara, thanks for picking up the thread. Second, it just occurred to me that part of the reason for the "path dependence" in political science may be that we are actually MORE adversarial in the initial knowledge-building stage (because of the peer review process).

To inject some humor into this otherwise serious topic, it's like the David Letterman bit "Is This Anything?": if two out of three anonymous reviewers agree it's "something," then we can conclude that we've added to our overall knowledge; if not, it's back to the drawing board (or at the very least another journal). Maybe operationalizations that make it through this sometimes brutal process deserve such deference? It just seems like relying on them can sometimes curb or inhibit original thought.

