This year, I was fortunate to be an organizer of the 6th Annual ISBA Solo & Small Firm Conference. In wrapping up the conference and preparing for next year, one of our first tasks was to review the speaker evaluations generated by a SurveyMonkey.com questionnaire--and note that I was one of the speakers.
Although there is controversy within the academy over whether teacher evaluations can be trusted, my colleagues at the ISBA had no problem using these scores to make future programming decisions. Note that the organizers attended a large proportion of the sessions (and a few of us were also presenters); the evaluations seemed to confirm our own impressions of speaker quality. There were no surprises. (Disclosure: my own evaluations were good but not spectacular.)
I got the impression that the ISBA approached the situation in the same way as any business trying to improve its product: Each speaker got a copy of his or her scores plus excerpts from the narrative comments; many will be invited back, but a few will not. Frankly, after reflecting on this experience, I think some of the academic debates on the value of student evaluations (see, e.g., this bibliography) would be ridiculed by practicing lawyers who are used to delivering value or losing a client. Virtually all lawyers involved in the conference would agree that the consensus view of their colleagues is what matters. This is a very pragmatic approach that is hard to dismiss.
If the judgment of lawyers can be trusted, what about law students? In an earlier post, I defended the validity of law school teaching evaluations. A recent article by Deborah Jones Merritt, "Bias, the Brain, and Student Evaluations," has the right idea: Refine the teacher evaluation process and improve its validity. But don't make the leap that the quality of legal instruction cannot be measured.
The timing of student reviews is suspect. They are done before the only test for the course. It would be a bit like asking the University of Michigan football players to rate their coach before they played that first game against Appalachian State. Doesn't this increase the risk that the ratings will be based on entertainment value rather than knowledge conveyed?
Posted by: Josh Ard | 05 September 2007 at 07:12 PM
Bill understates the ratings the attendees gave him for his presentation. A couple of tough reviews (probably former students), but otherwise the rankings and comments were very favorable, as you would expect knowing Bill.
I also presented with Bill, and I have seen the value of his contributions to this group of lawyers, who are generally underserved by the legal academy.
Posted by: ted waggoner | 25 August 2007 at 10:21 AM