A major advantage of content analysis is the ability to delegate much of the grunt work to student coders. But our experience, and that of others we have heard from, is that students can be dangerously prone to errors and misunderstandings. Avoiding these requires laborious stages of piloting, training, documentation, and double-coding. Is this worth the effort? Fewer than a quarter of the 125 projects we reviewed relied primarily on student coders; in the rest, the authors appeared to do their own coding.
Our view is that student coders should be used when doing so adds value, not merely to save effort. In other words, sometimes students can do a better job than faculty can on their own. Here is why.
Realize, first, that the methodological purpose of content analysis is to bring scientific objectivity to a reading of cases. If that is not your purpose, then why bother? The method's scientific rigor arises from the replicability of coding, which should be demonstrated rather than presumed. This usually requires double-coding to show inter-coder reliability (or consistency), i.e., that different readers would find the same content you found. But it can be difficult for non-expert student coders to agree on the more legally relevant content of cases. Faculty coding may be essential when analyzing the more subtle, latent, or judgmental (that is, the more interesting) aspects of cases.
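Inter-coder reliability of the kind just described is conventionally quantified with a chance-corrected agreement statistic such as Cohen's kappa, which discounts the agreement two coders would reach by guessing alone. As an illustrative sketch (the coders, labels, and data below are hypothetical, not drawn from any reviewed project):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Proportion of cases where the two coders assigned the same label.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement expected from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical example: two coders classify ten opinions as
# pro-plaintiff ("P") or pro-defendant ("D").
coder_1 = ["P", "P", "D", "P", "D", "P", "D", "D", "P", "D"]
coder_2 = ["P", "P", "D", "D", "D", "P", "D", "D", "P", "P"]
print(round(cohens_kappa(coder_1, coder_2), 2))  # prints 0.6
```

Here the coders agree on 8 of 10 cases (80%), but because chance alone would produce 50% agreement on evenly split labels, kappa is 0.6, a more honest measure of replicability than raw percent agreement.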
The reliability problem is not solved, however, simply by shifting to faculty coding. Being an expert does not make your coding self-validating, any more than a student's is. That is why, from a scientific point of view, coding by three students is more rigorous than coding by the single greatest scholar in the world. To demonstrate replicability, faculty who do their own coding should also have some of it double-coded independently, perhaps by a similarly expert colleague. But when the subject matter allows for student coding, that may be a better way to go, for reasons of both expediency and reliability.