This year, I was fortunate to be an organizer of the 6th Annual ISBA Solo & Small Firm Conference. In wrapping up the conference and preparing for next year, one of our first tasks was to review the speaker evaluations generated by a SurveyMonkey.com questionnaire--and note, I was one of the speakers.
Although there is controversy within the academy over whether teacher evaluations can be trusted, my colleagues at the ISBA had no problem using these scores to make future programming decisions. Note that the organizers attended a large proportion of the sessions (and a few of us were also presenters); the evaluations seemed to confirm our own impressions of speaker quality. There were no surprises. (Disclosure: my own evaluations were good but not spectacular.)
I got the impression that the ISBA approached the situation the way any business trying to improve its product would: each speaker got a copy of his or her scores plus excerpts from the narrative comments; many will be invited back, but a few will not. Frankly, after reflecting on this experience, I think some of the academic debates on the value of student evaluations (see, e.g., this bibliography) would be ridiculed by practicing lawyers, who are used to delivering value or losing a client. Virtually all lawyers involved in the conference would agree that the consensus view of their colleagues is what matters. This is a very pragmatic approach that is hard to dismiss.
If the judgment of lawyers can be trusted, what about law students? In an earlier post, I defended the validity of law school teaching evaluations. A recent article by Deborah Jones Merritt, "Bias, the Brain, and Student Evaluations," has the right idea: refine the teacher evaluation process and improve its validity. But don't leap to the conclusion that the quality of legal instruction cannot be measured.