
02 August 2006

Comments


Tracy Lightcap

These are all pretty easy answers, aren't they? Either the instruments are off, or the students are rewarding easy graders, or they aren't doing the work, or something.

I have to say I think the reason is actually pretty simple: bad pedagogy. I can't really speak to law schools, but I can tell you that we get the same excuses at my college and, I would guess, for pretty much the same reasons. It really is a lot easier to keep teaching in the "tried and true" ways than to try something new; it saves wear and tear, and, after all, as Bill points out, one normally isn't rewarded for teaching.

So how could it be done differently? I just attended one of the "Reacting to the Past" conferences at Barnard College. Using Reacting in courses at all levels would probably be useful. I know the idea of using complex simulations in law school (horrors!) might not appeal to others here, but it would probably work. If you want to know more, go to the Reacting website at Barnard.

Jim Maule

Student evaluations from disaffected students (chiefly second- and third-year students) skew the numbers. Not all law students fit the portrait that is painted of them. There are students compelled to attend law school by parents who want a lawyer in the family (something I never noticed until I joined a faculty, met proud parents at graduation, and saw the cringing child). There are students who don't know what else to do, or who think law school is something they can handle with minimal effort on the way to a high salary.

Students who are unwilling to do the work required to generate the feedback too few of us provide conclude (as some have told me to my face) that teachers who demand what they consider more than an acceptable level of work are not good teachers "for them." Students who enjoy the work and fit the painted portrait react differently.

What matters is the REASON for the positive or negative rating from a student. That's why the comments are so important, and it's tough to quantify them. I have seen student evaluation comments praising the entertainment skills of a professor, or the easygoing nature of the class (translation: listen and regurgitate). I have seen student evaluations trash professors who expect students to read before class.

Where do students get these ideas? By comparing demanding faculty with less demanding faculty and assuming that the teacher whose style suits them best is the benchmark against which to judge all other faculty. There are faculty earning high evaluation scores who register high in my "can they teach" book, and others who, to me, are doing far more damage to the educational process than they or anyone else (faculty or students) cares to realize.

In our Graduate Tax Program course evaluations we ask a question that, for some reason, we don't ask of J.D. students: "How many hours a week did you invest in the course outside of class?" (We also ask about attendance.) There is a strong correlation between the number of hours invested and the perceived value of the teaching. That makes sense. Students who don't prepare are lost in class, and most consider it to be the professor's fault. Every once in a while, a student will comment to the effect of "I suppose had I attended more classes and done the required work I would have appreciated what was happening in the classroom."

For me, the best measure of teaching ability would come from unannounced in-class audits by education specialists posing as students. Costly? Yes. Time-consuming? Yes. One gets what one pays for. Student evaluations cost little, are filled out in minutes (if at all, which is another problem), and, apart from some sensible comments, lack value.

Other ways of evaluating teaching, though flawed, include comparing the bar exam performance of students who took Prof X with that of students who took Prof Y. Flaws: it doesn't work for courses not tested on the bar exam, and it doesn't filter out the incompetent, lazy, or inattentive student's contribution to his or her own failures.

Likewise, looking at the success or failure of graduates in practice would be interesting. But how does one measure success? And how does one attribute a grad's track record to a particular faculty member?

Perhaps, though, the 3-years-out evaluation makes some sense. Once a student has experienced practice, law school classes appear in a new light. Many's a grad who has told me that only after entering practice did he or she understand why I structured my courses as I did, and that the "fun" courses turned out to have far less value. Yet which course's teacher did the student rate as the "better" teacher while in law school? Hmmm.

My views were cemented by a comment on an evaluation from last semester's Partnership Taxation course: "He's mean." I know that came from a student who approached me mid-semester asking for assistance because they had done no work in the course, having been occupied with another enterprise. I made it clear that they were in a huge hole and would need to invest even more time to dig out of it. They wanted quick answers, some sort of "here's the black letter law and the problem answers" response, not my message. Maybe it was mean. But it isn't bad teaching.


William Henderson

Jeff Y. raises a useful point, which is similar to Jeff Stake's "complex causality" theory. Ben has amassed an amazing, valuable data set. At this juncture, it is worth finding out what *is* correlated with high teaching evaluations.

In addition to Jeff Y's "grade inflation" bias, which can be tested by adding in historical grade means, Ben might want to explore whether the "line tilts" after controlling for class size, class subject matter, standard deviation, law school attended, practice experience, a PhD, or the number of years the professor has been teaching.
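A minimal sketch of what that robustness check could look like, assuming the data sit in a pandas DataFrame; the file name and every column name here (eval_mean, scholarship, grade_mean, class_size, years_teaching, has_phd, subject) are hypothetical placeholders, not variables from Ben's actual data set:

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical input: one row per professor/course section.
df = pd.read_csv("evals.csv")

# Baseline: the raw scholarship-teaching relationship.
baseline = smf.ols("eval_mean ~ scholarship", data=df).fit()

# Re-estimate with the controls suggested above; if the scholarship
# coefficient moves much once controls enter, the "line tilts".
controlled = smf.ols(
    "eval_mean ~ scholarship + grade_mean + class_size"
    " + years_teaching + has_phd + C(subject)",
    data=df,
).fit()

print(baseline.params["scholarship"], controlled.params["scholarship"])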

In all candor, I was a little surprised to find *no* relationship between teaching and scholarship.

Jeff Yates

One related hypothesis that could be investigated is the age-old axiom that teaching evaluations are a product of grades. In other words, teachers who give higher grades get higher student evaluations. I imagine that this information might be available in Ben's data set.
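If grade data are in the set, a first cut might be a simple correlation between section grade means and evaluation scores; a sketch, again with a hypothetical file and placeholder column names:

import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("evals.csv")  # hypothetical file

# Do sections with higher average grades also get higher evaluations?
r, p = pearsonr(df["grade_mean"], df["eval_mean"])
print(f"r = {r:.2f}, p = {p:.3f}")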

********************
Jeff Yates - J.D., Ph.D.
Associate Professor
Department of Political Science
University of Georgia
http://www.uga.edu/pol-sci/people/yates.htm
SSRN page: http://ssrn.com/author=454290
********************

The comments to this entry are closed.
