Andrew Gelman (Columbia--Statistics) has an interesting post about a nettlesome little topic: how to make sense of an independent variable that "flips" signs (that is, goes from negative to positive, or vice versa) across model specifications. This phenomenon, while unusual, typically arises when control variables are added to a model. Gelman's post includes helpful links to examples in the literature and has attracted insightful comments.
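For readers who want to see the mechanics, here is a minimal simulation sketch (not from Gelman's post; the variable names and numbers are invented for illustration) of how an omitted confounder can make a coefficient flip sign once the confounder is added as a control:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Confounder z drives both the regressor x and the outcome y.
z = rng.normal(size=n)
x = z + rng.normal(scale=0.5, size=n)                    # x rises with z
y = -1.0 * x + 3.0 * z + rng.normal(scale=0.5, size=n)   # true effect of x is -1

def ols(y, X):
    """Least-squares coefficients of y on the columns of X, with an intercept."""
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Specification 1: y on x alone -- the coefficient on x comes out positive,
# because x is standing in for the omitted z.
b_short = ols(y, x.reshape(-1, 1))

# Specification 2: y on x and z -- the coefficient on x flips to about -1.
b_long = ols(y, np.column_stack([x, z]))

print(f"x coefficient without control: {b_short[1]:+.2f}")
print(f"x coefficient with control:    {b_long[1]:+.2f}")
```

The sign flip here is entirely mechanical: the short regression attributes z's large positive effect to x, overwhelming x's own negative effect.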
Though most of our posts about statistical software skew toward Stata, we remain mindful that folks' tastes vary across the most popular packages (e.g., Stata, SPSS, R, SAS). With this in mind, we recently stumbled across a helpful and quite user-friendly SPSS resource. Mounted by SPSS's new owner, IBM, the "Case Studies" tab provides hands-on examples of how to perform various types of statistical analyses and interpret the results. The site walks users through (using quite helpful screen-shot slides) a suite of statistical tests found in SPSS's various stats packages. (For one example, factor analysis is explained here.) Worth a look for those using SPSS.
A workshop, co-sponsored by the Engelberg Center on Innovation Law and Policy at New York University School of Law and the United States Patent and Trademark Office, in cooperation with the Center for Law & Economics, ETH Zurich, the Oxford Intellectual Property Research Centre, and the Centre for Competition Policy (UEA), seeks paper proposals from "economics, management, and legal scholars on the empirical study of trademark data." The workshop seeks to "support better scholarship in this embryonic area of research and lead to the publication of high quality and high impact studies."

The workshop is scheduled for September 26-27, 2013, at the U.S. Patent and Trademark Office in Alexandria, Virginia. Paper proposals are due by July 1, 2013, and should be sent to: email@example.com.

For further information contact Prof. Barton Beebe (firstname.lastname@example.org) or Alan Marco (email@example.com).
Criticizing legal scholarship has become something of a sport for many, including federal judges. Chief Justice Roberts, for example, recently opined that "because law review articles are not of interest to the bench," he has trouble remembering the last law review article he read.
David Schwartz (Chicago-Kent) and Lee Petherbridge (Loyola-LA) subject the general claim to data. In a series of papers, the authors present findings on when an opinion (majority, dissent, or concurrence) cites to legal scholarship in the U.S. Supreme Court, Courts of Appeals, and Federal Circuit. For Supreme Court opinions, the authors find that legal scholarship citations "sharply vary across different types of legal issues." Click here for a quick summary of the papers (and the data sets).
Bob Lawless (Ill.) asked me to post the following Call for Papers for the 2013 MLEA, and I am delighted to do so. The University of Illinois College of Law will host the conference, and the deadline for proposals is August 1, 2013. A brief description of the conference follows (more details here).
"The University of Illinois College of Law and the Illinois Program on Law, Behavior & Social Science are hosting the Thirteenth Annual Meeting of the Midwest Law & Economics Association on October 11 & 12, 2013 in Champaign, Illinois. To participate, you need not be a Midwestern economist or even an economist or a Midwesterner. The event consists of law professors and economists presenting papers with varying degrees of law-and-economics content, ranging from empirical analyses and formal economic modeling to legal philosophy and doctrinal papers infused with economic thinking. Presentations will begin Friday morning and end early- to mid-afternoon on Saturday."
A recent post by David Schwartz (Chicago-Kent)--wondering whether empirical legal scholars should shoulder "special ethical responsibilities"--ignited a fascinating (and timely) discussion over at Concurring Opinions. Two reasons prompt Schwartz's concerns. "First, nearly all law reviews lack formal peer review. The lack of peer review potentially permits dubious data to be reported without differentiation alongside quality data. Second, empirical legal scholarship has the potential to be extremely influential on policy debates because it provides 'data' to substantiate or refute claims. Unfortunately, many consumers of empirical legal scholarship — including other legal scholars, practitioners, judges, the media, and policy makers — are not sophisticated in empirical methods."
Schwartz's concern focuses on what he calls "weak data." By that he means "reporting [results from] data that encourages weak or flawed inferences, that is not statistically significant, or that is of extremely limited value and thus may be misused." Specifically, "[t]he precise question I have been considering is under what circumstances one should report weak data, even with an appropriate explanation of the methodology used and its potential limitations."
Whether you agree with Schwartz or not, he raises an important question that warrants attention.
Responding to the recent debacle in which a grad student uncovered a glaring error in a paper by noted Harvard economists (here), Betsey Stevenson (Mich.) and Justin Wolfers (Mich.) initiated a (now growing) list of suggestions on how to minimize errors in empirical research. Not surprisingly, others, including Andrew Gelman (Columbia), have added to the list (here). While the list will inevitably grow, it already includes basic, helpful reminders for even the most experienced researchers.
Over at PrawfsBlawg, John Pfaff (Fordham) provides a cautionary reminder that most empiricists cannot hear often enough. Re-defining key variables can influence results, and re-defined variables are frequently difficult to detect, particularly in large, complex, longitudinal datasets. That is, too often, secondary analyses are undertaken without the necessary due diligence on the underlying data. Among Pfaff's take-aways:

"If nothing else, this is a strong warning against casually running empirical models, a growing problem in legal scholarship. Legal academics shouldn't just get their IT departments to install Stata on their computers, download some data, and then start running some regressions. It can take years to fully understand what a dataset looks like, what it is really measuring, its strengths and weaknesses. People who just run some quick regressions and then send them off to a law review are likely moving knowledge backwards, not forwards, since the risk of bad results is too great."
Information on the 8th Annual CELS 2013, co-sponsored by the Society for Empirical Legal Studies (SELS) and Penn Law School (and organized this year by David Abrams, Ted Ruger, and Tess Wilkinson-Ryan), is now available (here). The 2013 CELS will take place at Penn Law School on October 25-26. In addition, due to the success of last year's workshop and growing demand, a one-day, "hands-on" CELS empirical training workshop will also be offered on October 24, the day before the conference.
A recent news story underscores the importance of basic replication (as well as scholarly attention to detail) for empiricists.
"His [Thomas Herndon's] professors at the University of Massachusetts-Amherst had set his graduate class an assignment--pick an economics paper and see if you can replicate the results. It's a good exercise for aspiring researchers. Thomas chose Growth in a Time of Debt. It was getting a lot of attention, but intuitively, he says, he was dubious about its findings."
Turns out the grad student's intuition was dead-on: core results from the influential economics article--authored by two leading Harvard economists--could not be replicated. Herndon's replication efforts uncovered a basic error in the spreadsheet. "The Harvard professors had accidentally only included 15 of the 20 countries under analysis in their key calculation (of average GDP growth in countries with high public debt). Australia, Austria, Belgium, Canada and Denmark were missing." In addition, other data for some countries were missing.
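The mechanics of the error are worth spelling out. A minimal sketch, using made-up growth figures (not the actual Reinhart-Rogoff data), shows how a spreadsheet range that silently drops five rows shifts the headline average:

```python
# Hypothetical average-GDP-growth figures for 20 countries (illustrative only;
# these are NOT the numbers from the actual paper).
growth = {
    "Australia": 3.1, "Austria": 2.4, "Belgium": 2.0, "Canada": 2.6,
    "Denmark": 1.8, "Finland": 2.2, "France": 1.9, "Germany": 1.7,
    "Greece": 0.5, "Ireland": 2.9, "Italy": 0.8, "Japan": 1.0,
    "Netherlands": 2.1, "New Zealand": 2.3, "Norway": 2.5,
    "Portugal": 1.1, "Spain": 1.4, "Sweden": 2.7, "UK": 2.0, "US": 2.4,
}

full_mean = sum(growth.values()) / len(growth)

# Reproduce the kind of slip reported in the story: the first five countries
# (alphabetically, Australia through Denmark) fall outside the selected range.
dropped = {"Australia", "Austria", "Belgium", "Canada", "Denmark"}
truncated = {k: v for k, v in growth.items() if k not in dropped}
trunc_mean = sum(truncated.values()) / len(truncated)

print(f"mean over all 20 countries: {full_mean:.2f}")
print(f"mean over 15 countries:     {trunc_mean:.2f}")
```

With these invented numbers the truncated average understates growth, because the dropped countries happen to be relatively fast-growing; the general point is that a re-computation from the raw rows, as in Herndon's class exercise, catches the discrepancy immediately.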