So far I have discussed cultural cognition generally and outlined one interesting dynamic associated with it (“the white male status anxiety effect”). I now want to start to address the normative and prescriptive implications of cultural cognition for risk regulation.
Debates over risk regulation typically involve two competing models of risk perception. One, which can be called the rational weigher model, assumes that individuals (in aggregate, and over time) process information about risk in a manner consistent with expected utility theory. This position (which is associated with Kip Viscusi, among others) counsels a generally restrained role for governmental risk regulation: if people, left to their own devices, can be expected to make choices that maximize their well-being, then devising legal regimes and institutions to regulate risk-taking is both wasteful and inimical to individual freedom.
The irrational weigher model, in contrast, asserts that individuals systematically misprocess risk information as a result of cognitive limits and biases. Proponents of this view, such as Cass Sunstein and Stephen Breyer, advocate entrusting matters of environmental regulation, consumer protection, workplace safety, and the like to experts, who should be insulated as much as possible from politics to avoid the distorting influence of the public’s misapprehension of risk.
As a positive matter, neither the rational- nor the irrational-weigher model predicts the relationship between risk perceptions and cultural values that we found in our National Risk and Culture Survey. There’s no reason to think that hierarchists and individualists have more or less access to information about risk than do egalitarians and solidarists, or that any one of these types is more bounded in its rationality than the others.
Explaining why persons of these persuasions do in fact form systematically different views about the dangers of nuclear power, guns, abortion, unsafe sex, smoking, and the like requires a third model of risk perception. We propose to call it the cultural evaluator model: it emphasizes the impact of cultural values both on preferences about which risks are worth taking and on beliefs about which empirical information is worth crediting.
What are the normative implications of the cultural evaluator model? Well, to start, it sharply undermines the case for expertise made by Sunstein, Breyer, and other irrational-weigher theorists. As cultural evaluators, individuals adopt the factual beliefs about risk that express their commitments to one or another vision of the good society. In this circumstance, expressive valuations (“capitalism denigrates social solidarity”; “owning a gun enables self-reliance”) will be essentially interchangeable with corresponding factual beliefs (“nuclear power is an unsafe technology”; “owning a gun makes society safer”). Accordingly, if we give politically insulated experts the power to override popular factual beliefs about risks, we are necessarily delegating to them the power to override public values as well.
But that doesn’t necessarily mean that the law should leave risk regulation to the market, as proponents of the rational-weigher model propose. On the contrary, the cultural evaluator model shows how many seemingly private decisions about how to deal with risk can instead be seen as attempts to impose partisan visions of the good society on those who disagree with them.
For example, do hospitals have an obligation under the “informed consent” doctrine to inform patients of the HIV-positive status of medical personnel? The answer might seem to be “of course” if we understand “informed consent” doctrine to be implementing the individual right of patients to make decisions that affect their medical welfare. But the cultural evaluator model suggests that the demand for such information probably isn’t linked to “medical welfare” preferences in any straightforward sense. Our study found that hierarchists, and not egalitarians, individualists, or solidarists, rate the risk of infection from an HIV-positive surgeon as a serious one. If what makes hierarchists attend to this risk while shrugging off many more serious ones is their preference to see the law reflect their contested worldview, why should the law credit that preference at the expense of the medical personnel and other patients who would be adversely affected by it?
If the cultural evaluator model undermines the case for expertise and markets, it might be thought to bolster the case for a more unabashedly populist approach to risk regulation. If what’s at stake in debates about risk is the kind of society--hierarchical or egalitarian, individualist or communitarian--the law should express, then it might seem obvious that citizens should be empowered to decide that issue collectively, through democratic deliberation and electoral politics.
But I, at least, would be cautious about unconstrained populism, too. It doesn’t follow that popular risk perceptions should be normative for law just because they reflect judgments of cultural value. Those values themselves might be noxious, especially if they denigrate democratic ideals such as equality and individual autonomy. In addition, the enforcement of policies based on them might still have unacceptable consequences, either for segments of society or for society as a whole. And in any case, because citizens disagree about the nature of the good society, the imposition of one partisan position on that question, even in the guise of risk regulation, violates tenets of liberal neutrality.
But if not expertise, markets, or populist democracy, what does the cultural evaluator model recommend as the appropriate guide for risk regulation? Well, I said I was only going to start to address the normative and prescriptive implications of the cultural evaluator model in this post. So stay tuned, and I’ll continue the argument tomorrow--when I’ll introduce the “cultural self-affirmation effect” and the fascinating experimental work of Geoffrey Cohen on which this strategy of risk communication rests!