As I have neither an interest in nor an inclination to engage substantively with an ongoing scholarly "dust-up," I am content to let two papers that recently "crossed my desk" speak for themselves (compare Church & Williams with Baker & Bradt). That said, and independent of this exchange in particular and multidistrict litigation in general, a few points raised by Baker (Texas) and Bradt (Berkeley) in Anecdotes Versus Data in the Search for Truth about Multidistrict Litigation have broader applicability.
One goal of empirical research, generally stated, is to generate results that are persuasively and adequately scaffolded by the underlying data and research design. Whether a "convenience sample" eliciting self-reports from an online survey can reflect an unbiased, random, and representative draw from the underlying population is not a given and requires further analysis. That is, with such a data-generating strategy the specter of selection bias lurks and must be addressed.
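To make the selection-bias concern concrete, here is a minimal simulation sketch in Python. The numbers, the satisfaction scale, and the assumption that dissatisfied plaintiffs are more likely to opt in to an online survey are all invented for illustration; the point is only that a self-selected convenience sample can diverge systematically from the population it is meant to describe.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical population of 100,000 plaintiffs with a latent
# "satisfaction" score on a 0-10 scale (purely illustrative values).
N = 100_000
satisfaction = np.clip(rng.normal(loc=6.0, scale=2.0, size=N), 0, 10)

# Assumed self-selection mechanism: the probability of answering an
# online survey falls as satisfaction rises, so dissatisfied
# plaintiffs are overrepresented among respondents.
response_prob = 1 / (1 + np.exp(satisfaction - 5.0))
responded = rng.random(N) < response_prob
convenience_sample = satisfaction[responded]

# Simple random sample of the same size, for contrast.
srs = rng.choice(satisfaction, size=convenience_sample.size, replace=False)

print(f"Population mean satisfaction: {satisfaction.mean():.2f}")
print(f"Convenience-sample mean:      {convenience_sample.mean():.2f}")
print(f"Random-sample mean:           {srs.mean():.2f}")
```

Under these assumed response probabilities, the convenience-sample mean lands well below both the population mean and the random-sample mean, which is precisely the sort of gap that "further analysis" would need to rule out before treating survey self-reports as representative.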
Moreover, even if one is comfortable that neither selection bias nor other data shortcomings pose methodological problems, additional research design issues persist. To be more specific, and as Baker and Bradt note, if a research question seeks to engage meaningfully with, for example, plaintiffs' satisfaction with the MDL litigation experience, data on respondents' (that is, MDL plaintiffs') satisfaction (or dissatisfaction) should be compared with data from an otherwise equally representative and unbiased sample of plaintiffs in non-MDL cases. Without an appropriate control group, the results lack the reference point needed to interpret them.
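A short sketch of that comparison, again with entirely fabricated survey responses, may help show why the non-MDL baseline matters. The scale, sample sizes, and the use of a Welch two-sample test are my own illustrative choices, not anything drawn from the papers.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical satisfaction responses (1-7 scale) from two comparably
# drawn samples: MDL plaintiffs and a non-MDL comparison group.
mdl = rng.integers(1, 8, size=500)      # invented MDL respondents
non_mdl = rng.integers(2, 8, size=500)  # invented non-MDL comparison group

# Standing alone, "MDL mean satisfaction was 4.0" has no reference
# point. With a control group, the quantity of interest becomes the
# difference between the two groups.
diff = mdl.mean() - non_mdl.mean()
t_stat, p_value = stats.ttest_ind(mdl, non_mdl, equal_var=False)

print(f"MDL mean:     {mdl.mean():.2f}")
print(f"Non-MDL mean: {non_mdl.mean():.2f}")
print(f"Difference:   {diff:+.2f}  (Welch t = {t_stat:.2f}, p = {p_value:.3f})")
```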