Drawing upon a large representative sample from 19 ABA-approved law schools, Ben Barton’s study found no statistically significant relationship between various measures of scholarly output and teaching effectiveness. If this result is correct, then much of the legal academy’s discourse on “law school quality” is fundamentally flawed.
Rather than let go of old and comfortable ideas, many law professors will look for a basis to dispute the study’s findings. As noted in Ben’s last post, the critique of his study that will probably enjoy the most currency is the claim that law students are incapable of evaluating effective law school teaching, and that teaching evaluations are therefore an invalid measure of instructor quality.
In Ben’s defense, I believe he has ceded too much ground. There are at least three reasons why the critics of teaching evaluations are wrong.
First, law school teaching evaluations clearly pass any type of objective market test. Law students, who are vetted based on college performance and test scores, are a remarkably accomplished group who have decided to invest three years and large sums of tuition to enter the legal profession. Having bonded themselves to a specific career path, they have a strong economic interest in obtaining instruction that is (a) intellectually engaging, (b) increases the probability of bar passage, and/or (c) enhances their long-term career goals. After thirty hours of in-class instruction, most law students can figure out whether one of these criteria has been met.
Second, institutional incentives strongly favor scholarship over teaching. Promotion and tenure, salary raises, and lateral mobility are all based primarily or wholly on the quality and quantity of one’s scholarship. This point cannot be seriously disputed. Although there is a coherent theoretical rationale for a connection between scholarship and teaching, see, e.g., Brian Leiter, How to Rank Law Schools, 81
Third, there is empirical evidence that the prevailing law school pedagogy falls short of well-established principles of excellent teaching drawn from the education literature. For the last four years, the Law School Survey of Student Engagement (LSSSE) has obtained survey data from over 100 ABA-approved law schools. In the 2006 survey, the sample size was 24,492 and the response rate was 56%. Here are a few results that tie into established measures of effective teaching:
- Only 37.7% of respondents received prompt written or oral feedback from professors on a more than occasional basis; 23.9% “never” received any prompt written or oral feedback in any of their classes.
- 66.5% reported “never” working with a faculty member on activities outside of class.
- 46.3% reported writing no papers 20 pages or longer.
- Only 48.7% reported asking questions or contributing to discussions on a more than occasional basis.
- 33.9% reported “never” working with other students on projects during class.
Facilitating high levels of student engagement is difficult, time-consuming work. Absent institutional incentives that reward this effort, we would expect a norm of reasonable teaching competency, rather than excellence, to prevail. I would argue that this is exactly what the LSSSE data show. (Moreover, understanding this reality, is it any wonder that so many law school administrators resisted having their schools included in Ben's study?)
In conclusion, I do think there is a potentially important relationship between excellent teaching and scholarship. In the aggregate, Ben’s study shows no relationship. But in his tables, a few schools have statistically significant correlation coefficients, and they are all positive. So mining this relationship may be a function of faculty culture, including a careful faculty screening process. A handful of schools may be pulling this off. For the benefit of our students, achieving such a culture is certainly a goal worth striving for.