From Cass Sunstein, formerly Obama's "regulatory czar" as head of the Office of Information and Regulatory Affairs, at the New York Review of Books, "It's For Your Own Good!":
In the United States, as in many other countries, obesity is a serious problem. New York Mayor Michael Bloomberg wants to do something about it. Influenced by many experts, he believes that soda is a contributing factor to increasing obesity rates and that large portion sizes are making the problem worse. In 2012, he proposed to ban the sale of sweetened drinks in containers larger than sixteen ounces at restaurants, delis, theaters, stadiums, and food courts. The New York City Board of Health approved the ban.

RTWT.
Many people were outraged by what they saw as an egregious illustration of the nanny state in action. Why shouldn’t people be allowed to choose a large bottle of Coca-Cola? The American Beverage Association responded with a vivid advertisement, depicting Mayor Bloomberg in a (scary) nanny outfit.
But self-interested industries were not the only source of ridicule. Jon Stewart is a comedian, but he was hardly amused. A representative remark from one of his commentaries: “No!…I love this idea you have of banning sodas larger than 16 ounces. It combines the draconian government overreach people love with the probable lack of results they expect.”
Many Americans abhor paternalism. They think that people should be able to go their own way, even if they end up in a ditch. When they run risks, even foolish ones, it isn’t anybody’s business that they do. In this respect, a significant strand in American culture appears to endorse the central argument of John Stuart Mill’s On Liberty. In his great essay, Mill insisted that as a general rule, government cannot legitimately coerce people if its only goal is to protect people from themselves. Mill contended that
the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others. His own good, either physical or mental, is not a sufficient warrant. He cannot rightfully be compelled to do or forbear because it will be better for him to do so, because it will make him happier, because, in the opinion of others, to do so would be wise, or even right.

A lot of Americans agree. In recent decades, intense controversies have erupted over apparently sensible (and lifesaving) laws requiring people to buckle their seatbelts. When states require motorcyclists to wear helmets, numerous people object. The United States is facing a series of serious disputes about the boundaries of paternalism. The most obvious example is the “individual mandate” in the Affordable Care Act, upheld by the Supreme Court by a 5–4 vote, but still opposed by many critics, who seek to portray it as a form of unacceptable paternalism. There are related controversies over anti-smoking initiatives and the “food police,” allegedly responsible for recent efforts to reduce the risks associated with obesity and unhealthy eating, including nutrition guidelines for school lunches.
Mill offered a number of independent justifications for his famous harm principle, but one of his most important claims is that individuals are in the best position to know what is good for them. In Mill’s view, the problem with outsiders, including government officials, is that they lack the necessary information. Mill insists that the individual “is the person most interested in his own well-being,” and the “ordinary man or woman has means of knowledge immeasurably surpassing those that can be possessed by any one else.”
When society seeks to overrule the individual’s judgment, Mill wrote, it does so on the basis of “general presumptions,” and these “may be altogether wrong, and even if right, are as likely as not to be misapplied to individual cases.” If the goal is to ensure that people’s lives go well, Mill contends that the best solution is for public officials to allow people to find their own path. Here, then, is an enduring argument, instrumental in character, on behalf of free markets and free choice in countless situations, including those in which human beings choose to run risks that may not turn out so well.
Mill’s claim has a great deal of intuitive appeal. But is it right? That is largely an empirical question, and it cannot be adequately answered by introspection and intuition. In recent decades, some of the most important research in social science, coming from psychologists and behavioral economists, has been trying to answer it. That research is having a significant influence on public officials throughout the world. Many believe that behavioral findings are cutting away at some of the foundations of Mill’s harm principle, because they show that people make a lot of mistakes, and that those mistakes can prove extremely damaging.
For example, many of us show “present bias”: we tend to focus on today and neglect tomorrow. For some people, the future is a foreign country, populated by strangers. Many of us procrastinate and fail to take steps that would impose small short-term costs but produce large long-term gains. People may, for example, delay enrolling in a retirement plan, starting to diet or exercise, ceasing to smoke, going to the doctor, or using some valuable, cost-saving technology. Present bias can ensure serious long-term harm, including not merely economic losses but illness and premature death as well.
People also have a lot of trouble dealing with probability. In some of the most influential work in the last half-century of social science, Daniel Kahneman and Amos Tversky showed that in assessing probabilities, human beings tend to use mental shortcuts, or “heuristics,” that generally work well, but that can also get us into trouble. An example is the “availability heuristic.” When people use it, their judgments about probability—of a terrorist attack, an environmental disaster, a hurricane, a crime—are affected by whether a recent event comes readily to mind. If an event is cognitively “available”—for example, if people have recently suffered damage from a hurricane—they might well overestimate the risk. If they can recall few or no examples of harm, they might well underestimate the risk.
A great deal of research finds that most people are unrealistically optimistic, in the sense that their own predictions about their behavior and their prospects are skewed in the optimistic direction. In one study, over 80 percent of drivers were found to believe that they were safer and more skillful than the median driver. Many smokers have an accurate sense of the statistical risks, but some smokers have been found to believe that they personally are less likely to face lung cancer and heart disease than the average nonsmoker. Optimism is far from the worst of human characteristics, but if people are unrealistically optimistic, they may decline to take sensible precautions against real risks. Contrary to Mill, outsiders may be in a much better position to know the probabilities than people who are making choices for themselves.
Emphasizing these and related behavioral findings, many people have been arguing for a new form of paternalism, one that preserves freedom of choice, but that also steers citizens in directions that will make their lives go better by their own lights. (Full disclosure: the behavioral economist Richard Thaler and I have argued on behalf of what we call libertarian paternalism, known less formally as “nudges.”) For example, cell phones, computers, privacy agreements, mortgages, and rental car contracts come with default rules that specify what happens if people do nothing at all to protect themselves. Default rules are a classic nudge, and they matter because doing nothing is exactly what people will often do. Many employees have not signed up for 401(k) plans, even when it seems clearly in their interest to do so. A promising response, successfully increasing participation and strongly promoted by President Obama, is to establish a default rule in favor of enrollment, so that employees will benefit from retirement plans unless they opt out. In many situations, default rules have large effects on outcomes, indeed larger than significant economic incentives.
Default rules are merely one kind of “choice architecture,” a phrase that may refer to the design of grocery stores, for example, so that the fresh vegetables are prominent; the order in which items are listed on a restaurant menu; visible official warnings; public education campaigns; the layout of websites; and a range of other influences on people’s choices. Such examples suggest that mildly paternalistic approaches can use choice architecture in order to improve outcomes for large numbers of people without forcing anyone to do anything.
In the United States, behavioral findings have played an unmistakable part in recent regulations involving retirement savings, fuel economy, energy efficiency, environmental protection, health care, and obesity. In the United Kingdom, Prime Minister David Cameron has created a Behavioural Insights Team, sometimes known as the Nudge Unit, with the specific goal of incorporating an understanding of human behavior into policy initiatives. In short, behavioral economics is having a large impact all over the world, and the emphasis on human error is raising legitimate questions about the uses and limits of paternalism.