Over at Balkinization Brian Tamanaha (Wash U) posted graphs charting law student enrollment and BLS data on legal employment trends (2001-09). Visually, the two graphs support Brian's assessment: "Law schools thus responded to the worst recession in the legal market in at least two decades by letting in more law students." As Brian Leiter (Chicago) notes, however, "One can't tell, though, from the second chart [and the underlying BLS data] what portion of the downturn in 'legal employment' is a reduction in the employment of attorneys as opposed to other law-related employees."
In Child Support Guidelines and Divorce Rates, Margaret Brinig (Notre Dame) and Douglas Allen (Simon Fraser) draw on the National Longitudinal Survey of Youth (NLSY), a large dataset, to assess how variations in child support guidelines influence decisions to divorce. The paper's abstract follows.
"A child support guideline is a formula used to calculate support payments
based on a few family characteristics. Guidelines began replacing court
awarded support payments in the late 1970s and early 1980s, and were
later mandated by the federal government in 1988. Two fundamentally
different types of guidelines are used: percentage of obligor income,
and income shares models. This paper explores the incentives to divorce
under the two schemes, and uses the NLSY data set to test the key
predictions. We find that percentage of obligor income models are
destabilizing for families with high incomes. This may explain why
several states have converted from obligor to income share models, and
it provides a subtle lesson to the no-fault divorce debate."
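To see the mechanics behind the two models, here is a stylized sketch in Python. The flat rate and the declining schedule below are illustrative assumptions only -- real guidelines use state-specific tables, and these are not the paper's parameters -- but the sketch conveys why a flat percentage of obligor income keeps scaling at high incomes while an income-shares award does not:

```python
# Stylized comparison of the two guideline models; all rates below are
# illustrative assumptions, not any state's actual parameters.

def percentage_of_obligor(obligor_income, rate=0.20):
    """Award is a flat share of the paying parent's income,
    regardless of the custodial parent's earnings."""
    return rate * obligor_income

def stylized_basic_obligation(combined_income):
    """Declining-rate schedule standing in for a state's income-shares
    table: the share of combined income devoted to the child falls as
    combined income rises."""
    if combined_income <= 50_000:
        return 0.25 * combined_income
    return 0.25 * 50_000 + 0.15 * (combined_income - 50_000)

def income_shares(obligor_income, custodial_income):
    """A basic obligation is computed from the parents' combined
    income, then prorated by the obligor's share of that income."""
    combined = obligor_income + custodial_income
    return stylized_basic_obligation(combined) * (obligor_income / combined)

# The percentage model scales linearly with the obligor's income; the
# income-shares award grows more slowly at the top -- one intuition for
# the finding that percentage models are destabilizing at high incomes.
for income in (50_000, 100_000, 200_000):
    print(income,
          round(percentage_of_obligor(income)),
          round(income_shares(income, custodial_income=40_000)))
```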
ICPSR has just made available Wave 1 of the After the JD database. From the study description:
The After the JD project is designed to be a longitudinal study, seeking
to follow a sample of approximately 10 percent of all the individuals
who became lawyers in the year of 2000. It is the largest and most
ambitious study ever undertaken by researchers of legal careers aiming
to track the professional lives of more than 5,000 lawyers during their
first 10 years after law school.
Ronen Avraham (Texas) has updated a tremendously useful resource for those assessing the impacts of various tort reforms. With the Database of State Tort Law Reforms (3rd), Ronen sets out to establish one "canonized" dataset that "will increase our understanding of tort reform’s impacts on our lives." Direct access to the dataset (in Excel) is found here. A more complete description of the project follows:
"... DSTLR (3rd) updates the DSTLR (2nd) and contains the most detailed, complete and comprehensive legal dataset of the most prevalent tort reforms in the United States between 1980 and 2008. ... The dataset records state laws in all fifty states and the District of Columbia over the last several decades. For each reform we record the effective date, a short description of the reform, whether or not the jury is allowed to know about the reform, whether the reform was upheld or struck down by the states’ courts, as well as whether it was amended by the state legislator."
Jeff Yates (SUNY Binghamton--Poli Sci) ignited an important discussion (here) about data sharing norms and possible variations across fields. Jeff notes:
"In political science there are very strong professional mores to share
data with other researchers. In fact, it is usually expected
immediately after publication of your first article using the data if
not before that time (e.g. after presenting a working paper at a
conference). I profess some ignorance of the social mores on data
sharing in empirical legal studies, but from my few conversations on
this point, I think that they might be somewhat different."
Setting aside scholarly norms (or, better yet, aspirations), Jeff also wonders whether IP doctrine tugs at data-sharing norms for empirical legal scholars.
Good news -- the new NSF-funded Supreme Court Database Website is now up and running, and its easy-to-use interface should bring the database to thousands more users. Not only can you download the most current version of the database and its companions, but you can also perform analyses right on the website. This is good stuff; check it out!
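As a minimal example of what offline work with the database might look like once you've downloaded the case-centered file: the term and decisionDirection variables are documented in the SCDB codebook, but the file name below is a placeholder for whatever the current release is called:

```python
import pandas as pd

# Sketch of a quick analysis on the downloaded case-centered data.
# "term" and "decisionDirection" (1 = conservative, 2 = liberal) are
# SCDB codebook variables; the file name is a placeholder.
scdb = pd.read_csv("SCDB_case_centered.csv", encoding="latin-1")

# Share of decisions coded liberal, by term.
liberal_share = (scdb.assign(liberal=scdb["decisionDirection"] == 2)
                     .groupby("term")["liberal"]
                     .mean())
print(liberal_share.tail())
```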
With a hat tip to the Monkey Cage, I thought folks might be interested in taking a look at this paper. I know my fellow editor Carolyn has done some research on the Spaeth data as well -- perhaps she'll weigh in on this research? (FYI, I have not yet read the Harvey/Woodruff paper.)
The updated Judicial Common Space (JCS) scores are available here, courtesy of Profs. Lee Epstein, Andrew Martin, Jeff Segal, and Chad Westerland. The JCS scores attempt to provide preference estimates for Supreme Court justices that are directly comparable to preference measures for Courts of Appeals judges, members of Congress, and the President. The data extend through 2008 and correspond with the most recent version of Keith Poole's Common Space scores.
Thanks to the Empirical Legal Studies crew for letting me guest blog here. Consistent with the motto of the blog, I hope to use my time with ELS to bring some method to the madness that seems likely to swirl around Sonia Sotomayor’s confirmation hearings this summer.
For the past two years I have been compiling a dataset capturing all of the statements made by senators and Supreme Court nominees at the confirmation hearings held by the Senate Judiciary Committee. The dataset starts with Felix Frankfurter's hearing and ends with Justice Alito's. I have coded by issue area all statements made by the nominees and senators. Each unit of analysis includes information about the political party of the questioning senator, the appointing president, and the committee chair. The dataset also captures instances where a court case was discussed by name, and includes variables indicating whether the statement being coded addressed constitutional interpretation, statutory interpretation, or federalism.
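To make the structure concrete, here is a hypothetical sketch of how one coded statement might be represented. The field names are my own guesses based on the description above, not the dataset's actual variable names:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record layout for one coded statement; field names are
# illustrative and may differ from the dataset's actual variables.
@dataclass
class HearingStatement:
    hearing: str                  # e.g., "Alito (2006)"
    speaker: str                  # nominee or senator making the statement
    senator_party: Optional[str]  # party of the questioning senator
    appointing_president: str
    committee_chair: str
    issue_area: str               # coded issue area of the statement
    case_named: Optional[str]     # case discussed by name, if any
    constitutional_interp: bool
    statutory_interp: bool
    federalism: bool

stmt = HearingStatement("Alito (2006)", "Sen. Specter", "R",
                        "George W. Bush", "Arlen Specter",
                        "abortion", "Roe v. Wade",
                        True, False, False)
```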
I hope to use this data to provide real-time commentary on the confirmation hearings this summer. Because Supreme Court vacancies occur so infrequently, our discussion of the confirmation process often lacks historical perspective. I hope this data can help rectify that by bringing concrete information to conversations about what has -- and has not -- been “the norm” in this process.
I also hope that the ELS community will provide feedback about how to use and improve this dataset. This is in many ways a ‘test run’ of the data, so I look forward to hearing your comments about it. I’m also open to suggestions regarding what type of information you think would be interesting to pull out of the dataset.
I just finished posting some data and documentation on the Dataverse Network. This is a wonderful -- and remarkably easy to use -- resource that provides scholars with a virtual archive for their data. It allows you to make your data publicly available in a permanent format with a stable URL -- useful if, for example, you change institutions. You can even incorporate links to your data into your own website, branding the material as your own, without having to worry about hosting and maintaining it. If other scholars use or rely on your data, there is a formal citation that they can use, giving you appropriate scholarly credit.
And of course the Dataverse is a great resource for scholars who might want to replicate other people's work or rely on their data. Thanks to Harvard's Institute for Quantitative Social Science for creating and making available this resource. You can read more about it here, in a post by Gary King at the Social Science Statistics Blog.
A couple of years ago, Paul Caron flagged an obscure ABA rule change that required law schools to report the highest LSAT score of an admitted student rather than the prior practice of averaging. Paul and Moneylaw blogger Tom Bell foresaw a likely surge in the number of repeat test-takers -- but without any appreciable benefit. Repeat scores are, in fact, less accurate in predicting law school grades -- though, let's not kid ourselves, law schools are not looking at the LSAT anymore for predicting 1L performance. Retaking also costs a lot of time and money. And if upper-class white kids are better able to afford test preparation courses, the change is likely to exacerbate the racial performance gap.
Well, the data is in for the October 2008 cycle. It is indisputable that applicants are figuring out the implications of a low LSAT score in the U.S. News rankings era (fewer admissions letters, fewer scholarship dollars) and the potential upside, with no downside, of taking the exam again. Despite a 1.7% drop in first-time takers, repeater volume is up 16.8%. In the Northeast, where the positional competition is the most intense, there has been a staggering 33.7% increase.
From an individual perspective, I know it makes sense to take the test a second or third time. Indeed, it is comforting to many to have that option. But in the aggregate, this policy really just opens the door to a protracted zero-sum game. If law schools were making better decisions because of the second or third scores, the additional time and expense could be justified. The second score alone, however, is less reliable than the first and second combined.
Understanding these dynamics, the regulator (the ABA Section of Legal Education and Admissions to the Bar) is supposed to set rules that are in the best interests of students and the profession -- not law schools or testing agencies. Sam Stonefield of Western New England Law wrote a very detailed objection to this policy. He was prescient, and I appreciate the time he took to spell it all out.