Good news -- the new NSF-funded Supreme Court Database website is now up and running, and its easy-to-use interface should make the database accessible to thousands more users. Not only can you download the most current version of the database and its companions, but you can also perform analyses right on the website. This is good stuff; check it out!
With a hat tip to the Monkey Cage, I thought folks might be interested in taking a look at this paper. I know my fellow editor Carolyn has done some research on the Spaeth data as well -- perhaps she'll weigh in on this research? (FYI, I have not yet read the Harvey/Woodruff paper.)
The updated Judicial Common Space scores are available here, courtesy of Profs. Lee Epstein, Andrew Martin, Jeff Segal, and Chad Westerland. The JCS scores attempt to provide preference estimates for Supreme Court justices that are directly comparable to preference measures for Courts of Appeals judges, members of Congress, and the President. The data extend through 2008 and correspond with the most recent version of Keith Poole's Common Space scores.
Thanks to the Empirical Legal Studies crew for letting me guest blog here. Consistent with the motto of the blog, I hope to use my time with ELS to bring some method to the madness that seems likely to swirl around Sonia Sotomayor’s confirmation hearings this summer.
For the past two years I have been compiling a dataset capturing all of the statements made by senators and Supreme Court nominees at the confirmation hearings held by the Senate Judiciary Committee. The dataset starts with Felix Frankfurter's hearing and ends with Justice Alito's. I have coded all statements made by the nominees and senators by issue area. Each unit of analysis includes information about the political party of the questioning senator, the appointing president, and the committee chair. The dataset also captures instances where a court case was discussed by name, and includes variables indicating whether the statement being coded addressed constitutional interpretation, statutory interpretation, or federalism.
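To make the coding scheme concrete, here is a minimal sketch of what one unit of analysis in a dataset like this might look like. All field names, values, and the helper function are my own illustrative assumptions, not the actual codebook:

```python
# Illustrative sketch only: the field names and example values below are
# assumptions for exposition, not the real codebook of the dataset.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Statement:
    hearing: str                  # nominee whose hearing this is
    speaker: str                  # senator or nominee making the statement
    senator_party: str            # party of the questioning senator, if any
    appointing_president: str
    committee_chair: str
    issue_area: str               # coded issue area of the statement
    case_cited: Optional[str]     # court case discussed by name, if any
    constitutional_interp: bool
    statutory_interp: bool
    federalism: bool

def cited_statements(statements: List[Statement], issue: str) -> List[Statement]:
    """Return statements in a given issue area that cite a case by name."""
    return [s for s in statements if s.issue_area == issue and s.case_cited]

# Two made-up example records to show the shape of the data.
demo = [
    Statement("Alito", "Sen. Specter", "R", "G.W. Bush", "Specter",
              "abortion", "Roe v. Wade", True, False, False),
    Statement("Alito", "Judge Alito", "", "G.W. Bush", "Specter",
              "executive power", None, True, False, False),
]
print(len(cited_statements(demo, "abortion")))  # → 1
```

With records in this shape, questions like "how often did Democratic senators raise a given issue area under a given chair" reduce to simple filters over the list.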
I hope to use this data to provide real-time commentary on the confirmation hearings this summer. Because Supreme Court vacancies occur so infrequently, our discussion of the confirmation process often lacks historical perspective. I hope this data can help rectify that by bringing concrete information to conversations about what has – and has not – been “the norm” in this process.
I also hope that the ELS community will provide feedback about how to use and improve this dataset. This is in many ways a ‘test run’ of the data, so I look forward to hearing your comments about it. I’m also open to suggestions regarding what type of information you think would be interesting to pull out of the dataset.
I just finished posting some data and documentation on the Dataverse Network. This is a wonderful -- and remarkably easy to use -- resource that provides scholars with a virtual archive for their data. It allows you to make your data publicly available in a permanent format with a stable URL -- useful if, for example, you change institutions. You can even incorporate links to your data into your own website, branding the material as your own, without having to worry about hosting and maintaining it. If other scholars use or rely on your data, there is a formal citation that they can use, giving you appropriate scholarly credit.
And of course the Dataverse is a great resource for scholars who might want to replicate other people's work or rely on their data. Thanks to Harvard's Institute for Quantitative Social Science for creating and making available this resource. You can read more about it here, in a post by Gary King at the Social Science Statistics Blog.
A couple of years ago, Paul Caron flagged an obscure ABA rule change
that required law schools to report the highest LSAT score of an
admitted student versus the prior practice of averaging. Paul and
Moneylaw blogger Tom Bell
foresaw a likely surge in the number of repeat test-takers--but without any appreciable benefit. Repeat scores are, in fact, less accurate in predicting law school grades -- though, let's not kid ourselves, law schools are not looking at the LSAT anymore for predicting 1L performance. Retaking the exam also costs a lot of time and money. And if upper-class white kids are better able to afford test preparation courses, the policy is likely to exacerbate the racial performance gap.
Well, the data is in for the October 2008 cycle. It is indisputable that applicants are figuring out the implications of a low LSAT score in the U.S. News rankings era (fewer admissions letters, fewer scholarship dollars) and the potential upside/no downside of taking the exam again. Despite a 1.7% drop in first-time takers, repeater volume is up 16.8%. In the Northeast, where the positional competition is the most intense, there has been a staggering 33.7% increase.
From an individual perspective, I know it makes sense to take the test a second or third time. Indeed, it is comforting to many to have that option. But in the aggregate, this policy really just opens the door for a protracted zero-sum game. If law schools were making better decisions because of the second or third scores, the additional time and expense could be justified. The second score alone, however, is a less reliable predictor than the first and second scores combined.
Understanding these dynamics, the regulator (the ABA Section of Legal Education and Admissions to the Bar) is supposed to set rules that are in the best interests of students and the profession -- not law schools or testing agencies. Sam Stonefield of Western New England Law wrote a very detailed objection to this policy. He was prescient. I appreciate the time he took to spell it all out.
Supplementing one of the leading sources of data on state civil litigation activity in the United States, the Bureau of Justice Statistics (BJS) recently released a report, Civil Bench and Jury Trials in State Courts, 2005, which provides an important snapshot and illustrates litigation trends since 1992. The report discusses general civil cases (tort, contract, and real property) concluded by a bench or jury trial in a national sample of jurisdictions in 2005. Topics include the types of civil cases that proceed to trial, the differences between civil cases adjudicated by judges versus juries, and the types of plaintiffs and defendants represented in civil trials. Key findings include:
In 2005 plaintiffs won in more than half (56%) of all general civil trials
concluded in state courts. The plaintiff was significantly more likely to win in
a bench trial compared to a jury trial. Among all plaintiff winners the median
final award was $28,000. Approximately 4% of all plaintiff winners won
$1,000,000 or more. Contract cases in general had higher median awards ($35,000)
than tort cases ($24,000).
The total number of civil trials declined by over 50% from 1992 to 2005 in
the nation’s 75 most populous counties. Tort cases decreased the least (40%)
while real property (77%) and contract (63%) cases registered the largest declines.
In the nation's 75 most populous counties, some tort case categories have
seen marked increases in their median jury awards. This was particularly the
case for product liability trials, where the median awards were about 5 times
higher in 2005 than in 1992, and for medical malpractice trials, where the median
jury awards more than doubled from $280,000 in 1992 to $682,000 in 2005.