
10 August 2006

Comments

Tracy Lightcap

You're right. This is nothing much new, but it always bears repeating.

One caveat, however: you are treating significance tests in the context of random samples. That is not the only way to make statistical inferences, or even the best one. Significance tests become more useful and - a bonus - more understandable when they are generated by resampling techniques, as the sketch below illustrates. Bootstrapped estimates bear more directly on the stability of substantive estimates. The jackknife is also handy in some situations, though the widespread adoption of resampling in statistical packages makes it less necessary.
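
To make this concrete, here is a minimal sketch of a percentile bootstrap for a regression slope (Python with numpy; the data are simulated purely for illustration, not drawn from any study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: outcome y depends linearly on x plus noise.
n = 200
x = rng.normal(size=n)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=n)

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

# Bootstrap: resample (x, y) pairs with replacement and re-estimate.
n_boot = 5000
boot_slopes = np.empty(n_boot)
for b in range(n_boot):
    idx = rng.integers(0, n, size=n)
    boot_slopes[b] = slope(x[idx], y[idx])

# A 95% percentile interval speaks directly to the stability of the
# substantive estimate, which is the point made above.
lo, hi = np.percentile(boot_slopes, [2.5, 97.5])
print(f"slope: {slope(x, y):.3f}, 95% bootstrap CI: [{lo:.3f}, {hi:.3f}]")
```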

Other than that: spot on.

Richard Lempert

A Further Thought:
One should also note that while significance tests should not be used to measure the importance or policy relevance of relationships, measures of the magnitude of effects may also be misinterpreted or misused. For example, data transformations, such as those commonly used in studies of the deterrent effects of capital punishment, mean that coefficients are properly interpreted as elasticities, and reports of what has been found typically state the tradeoffs these elasticities imply. Thus Ehrlich reported that in his data each additional execution appeared to prevent seven or eight homicides. Assume there were no problems with the data or analysis and that this estimate was correct. It would still not necessarily mean that if policies were changed to execute more people, each additional execution would save eight lives. We know the effect holds only over the range of cases studied; marginal effects caused by policy changes could be very different. Thus the trade-off found cannot be magically translated into ideal policy, even when there is substantial value consensus. Doubling the execution rate in a state might diminish the homicide rate substantially less than a study of the status quo ante implies, and it would increase the risk that an innocent person would be executed. A policy maker who values both increased deterrence and low risk to the innocent might find the trade-off at the margin unacceptable, even if he were willing to pay the cost of increased risk to the innocent if each execution saved eight lives. This does not mean that the hypothesized research would not support policy change. But policy change based on research about past states should be the occasion for continuing research on what is happening in the changed world, not a marker that we now know what we needed to know to devise wise policy and so should spend our research money on other problems.
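
To see why a log-log coefficient is an elasticity, and why extrapolating it beyond the observed range is hazardous, consider a minimal sketch (Python; every number here is invented for illustration and has nothing to do with Ehrlich's actual data or model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical log-log specification:
#   log(homicides) = a + b * log(executions) + noise.
# The coefficient b is an elasticity: a 1% rise in executions is
# associated with a b% change in homicides -- over the observed range.
executions = rng.uniform(1, 10, size=100)   # the policy range actually observed
log_h = 5.0 - 0.06 * np.log(executions) + rng.normal(scale=0.05, size=100)

b, a = np.polyfit(np.log(executions), log_h, deg=1)
print(f"estimated elasticity: {b:.3f}")

# Predicting at a doubled execution rate (outside the 1-10 range observed)
# assumes the same elasticity holds at the margin -- exactly the
# assumption warned against above.
for e in (5.0, 20.0):
    pred = np.exp(a + b * np.log(e))
    print(f"executions={e:4.1f} -> predicted homicides: {pred:.1f}")
```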

One must also be cautious about measures of effect because researchers, in presenting their work, naturally emphasize the effects of the variables whose impacts they are focusing on. Yet for the policy maker the effects of control variables may be just as important, and perhaps should shape the policy implications drawn from the effects of the focal variables. Suppose, for example, that a student of capital punishment found that each additional execution seems to translate into eight fewer homicides, but that each year beyond 8th grade that the average student stays in school seems to translate into twenty fewer homicides. Is it either wise or moral in these circumstances to invest political and monetary capital in increasing execution rates rather than in programs designed to keep children in school? The policy maker should, and should want to, consider this question. But the effects of policy-relevant control variables may not be evident from the reported research, either because information on the effects of control variables is not given or because it is given only in tabular form and not highlighted in the abstracts, executive summaries, or conclusions that grab our attention.

A final example involves the interpretation of logistic regressions. Because it is difficult to intuit the implications of logistic regression coefficients, information is often presented on their effects at the mean values of the other independent variables. In a particular jurisdiction, however, all other variables are unlikely to be at their means, and the implications of changes made in response to the research may, even on the model's own terms, be quite different from what the presentation of results suggests.
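
A toy sketch of this last problem (Python, with invented coefficients rather than any actual study's estimates): in a logistic model the effect of a one-unit change in a focal variable depends on where the other covariates sit, so the effect evaluated at the sample means can differ sharply from the effect in a jurisdiction far from those means.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical fitted model: P(outcome) = sigmoid(b0 + b1*x1 + b2*x2),
# where x1 is the focal policy variable and x2 is a control.
b0, b1, b2 = -2.0, 0.8, 1.5

def effect_of_x1(x1, x2):
    """Change in predicted probability from a one-unit increase in x1,
    holding x2 fixed -- the model's own implied effect at that point."""
    return sigmoid(b0 + b1 * (x1 + 1) + b2 * x2) - sigmoid(b0 + b1 * x1 + b2 * x2)

# Effect reported "at the mean" of the control (say x2 = 0) ...
print(f"effect at x2 =  0.0 (the mean): {effect_of_x1(0.0, 0.0):.3f}")

# ... versus the effect in jurisdictions whose control variable sits far
# from the mean. Same model, very different implied policy payoff.
for x2 in (-2.0, 2.0):
    print(f"effect at x2 = {x2:+.1f}:           {effect_of_x1(0.0, x2):.3f}")
```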

The implication I draw from what I have said is that policy makers must be cautious in drawing conclusions about the likely effectiveness of policy changes from even well-conducted quantitative research, and even when the variables that suggest change appear to be both statistically significant and substantively important. But these cautions are decidedly NOT a call for refraining from social science research, ignoring social science findings, or preferring softer over harder data. Rather, they are a call for the careful and sophisticated interpretation of what research does and does not tell us, and for treating research on important policy-relevant issues as an ongoing project in which findings from past studies are continually updated and in which there will almost always be more that is relevant to be learned.

