From:

Risk: The Science and Politics of Fear

by Dan Gardner, 2008

Excerpted mostly from chapters 2, 4, 6, 8, 10, 11

Four decades ago, scientists knew little about how humans perceived risks, how we judged which risks to fear and which to ignore, and how we decided what to do about them. But in the 1960s, pioneers like Paul Slovic, today a professor at the University of Oregon, set to work. They made startling discoveries and over the ensuing decades, a new body of science grew. The implications of this new science were enormous for a whole range of different fields. In 2002, one of the major figures in this research, Daniel Kahneman, won the Nobel Prize in economics, even though Kahneman is a psychologist who never took so much as a single class in economics. What the psychologists discovered is that a very old idea is right. Every human brain has not one but two systems of thought. They called them System One and System Two. One is Head, the other, Gut.


So we have, in effect, two minds working semi-independently of each other. Further complicating our thoughts is the constant, complex interaction between the two. It’s possible, for example, that knowledge learned and used consciously by Head can sink into the unconscious mind, to be used by Gut. Every veteran golfer has experienced this process. When you first pick up a club, you consciously follow instructions. Keep head back, knees bent, right arm straight. Beginners think about each of these points consciously and carefully. They can’t just step up to the tee and swing. But do this often and long enough and you no longer have to think about it. Proper form just feels right and it happens much more quickly and fluidly. In fact once it has been internalized, consciously thinking about what you’re doing can interrupt the flow and hurt performance – which is why professional athletes are taught by sports psychologists to avoid thinking about the motions they have done thousands of times before.


Even the most cerebral actions can undergo this shift from Head to Gut. Neophyte doctors faced with a common ailment consciously and carefully think about the checklist of symptoms before making a diagnosis, but old hands “feel” the answer in an instant. Art historians whose job is to authenticate antiquities make the same transition. In the now-famous anecdote that opens Malcolm Gladwell’s book Blink, a Greek statue that had supposedly been authenticated by a battery of scientific tests was nonetheless instantly dismissed as a fraud by several art historians. Why? The experts couldn’t say. They just felt that something was wrong – one called it “intuitive repulsion.” Testing later confirmed the statue was indeed a fraud, a truth the experts were able to feel in an instant because they had studied and analyzed Greek statues for so long that their knowledge and skills had been absorbed into the unconscious operations of Gut.


I know this because the questions I’ve asked come from a study conducted by German psychologists Fritz Strack and Thomas Mussweiler. They asked people two versions of the Gandhi questions. One version first asked whether Gandhi was older or younger than 9 when he died, and then asked people to guess his age at death. The other began by asking whether Gandhi was older or younger than 140 when he died, followed by the same instruction to guess his age. Strack and Mussweiler found that when the first question mentioned the number nine, the average guess on the following question was 50. In the second version, the average guess was 67. So those who heard the lower number before guessing guessed lower. Those who heard the higher number guessed higher.
Psychologists have conducted many different variations on this experiment. In one version, participants were first asked to construct a single number from their own phone numbers. They were then asked to guess the year in which Attila the Hun was defeated in Europe. In another study, participants were asked to spin a wheel of fortune in order to select a random number – and then they were asked to estimate the number of African nations represented in the United Nations. In every case, the results are the same: The number people hear prior to making a guess influences that guess. The fact that the number is unmistakably irrelevant doesn’t matter. This is the Anchoring Rule.
By now, the value of the Anchoring Rule to someone marketing fear should be obvious. Imagine that you are, say, selling software that monitors computer usage. Your main market is employers trying to stop employees from surfing the Internet on company time. But then you hear a news story about pedophiles luring kids in chat rooms and you see that this scares the hell out of parents. So you do a quick Google search and you find the biggest, scariest statistic you can find – 50,000 pedophiles on the Internet at any given moment – and you put it in your marketing. Naturally, you don’t question the accuracy of the number. That’s not your business. You’re selling software.


Four decades ago, Kahneman and Tversky collaborated on research that looked at how people form judgements when they’re uncertain of the facts. When Kahneman and Tversky began their work, the dominant model of how people make decisions was that of Homo economicus. “Economic man” is supremely rational. He examines evidence. He calculates what would best advance his interests as he understands them, and he acts accordingly.
“For every problem there is a solution that is simple, clean and wrong,” wrote H.L. Mencken, and the Homo economicus model is all that. Unlike Homo economicus, Homo sapiens is not perfectly rational. Proof of that lies not in the fact that humans occasionally make mistakes. The Homo economicus model allows for that. It’s that in certain circumstances, people always make mistakes. We are systematically flawed. In 1957, Herbert Simon, a brilliant psychologist/economist/political scientist and future Nobel laureate, coined the term bounded rationality. We are rational, in other words, but only within limits. Amos Tversky died in 1996. In 2002, Daniel Kahneman experienced the academic equivalent of a conquering general’s triumphal parade: He was awarded the Prize in Economic Sciences in Memory of Alfred Nobel. He was probably the only winner in the history of the prize who never took so much as a single class in economics. The amazing thing is that the Science article, which sent shock-waves out in every direction, is such a modest thing on its face. Kahneman and Tversky didn’t say anything about rationality. They didn’t call Homo economicus a myth. All they did was lay out solid research that revealed some of the heuristics – the rules of thumb – Gut uses to make judgments such as guessing how old Gandhi was when he died or whether it’s safe to drive to work.


Like the paper itself, the three rules of thumb it revealed were admirably simple and clear. The first – the Anchoring Rule – we’ve already discussed. The second is what psychologists call the representativeness heuristic, which I’ll call the Rule of Typical Things. And finally, there is the availability heuristic, or the Example Rule, which is by far the most important of the three in shaping our perceptions and reactions to risk.


The Rule of Typical Things
Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.
How likely is it that Linda is a bank teller? And how likely is it that she is a bank teller who is active in the feminist movement? (In the original quiz, these two options appeared among a longer list of possibilities.)

When Kahneman and Tversky gave this quiz to undergraduate students, 89 per cent decided it was more likely that Linda was a bank teller who is active in the feminist movement than that she is a bank teller alone. But if you stop and think about it, that makes no sense. How can it be more likely that Linda is a bank teller and a feminist than that she is solely a bank teller? If it turns out to be true that she is a bank teller and a feminist, then she is a bank teller – so the two descriptions have to be, at minimum, equally likely. What’s more, there is always the possibility that Linda is a bank teller but not a feminist. So it has to be true that it is more likely that she is a bank teller alone than that she is a bank teller and a feminist. It’s simple logic – but very few people see it.
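Put formally – my notation, not Kahneman and Tversky’s – the logic Head should apply here is a single inequality from elementary probability:

    P(\text{bank teller} \wedge \text{feminist}) \le P(\text{bank teller})

Every case in which Linda is both a bank teller and a feminist is also a case in which she is a bank teller, so the conjunction can never be the more likely of the two descriptions.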
So Kahneman and Tversky stripped the quiz down and tried again. They had students read the same profile of Linda. But then they simply asked whether it is more likely that Linda is (a) a bank teller or (b) a bank teller who is active in the feminist movement?
Here, the logic is laid bare. Kahneman and Tversky were sure people would spot it and correct their intuition. But they were wrong. Almost exactly the same percentage of students – 85 per cent – said it is more likely that Linda is a bank teller and a feminist than a bank teller only.
Kahneman and Tversky also put both versions of the “Linda problem,” as they called it, under the noses of experts trained in logic and statistics. When the experts answered the original question, with its long list of distracting details, they got it just as wrong as the undergraduates. But when they were given the two-line version, it was as if someone had elbowed them in the ribs. Head stepped in to correct Gut and the error rate plunged. When the scientist and essayist Stephen Jay Gould took the test, he realized what logic – his Head – told him was the right answer. But that didn’t change what intuition – his Gut – insisted was true. “I know [the right answer],” he recounted, “yet a little homunculus in my head continues to jump up and down, shouting at me – ‘but she can’t just be a bank teller; read the descriptions.’” What’s happening here is simple and powerful. One tool Gut uses to make judgements is the Rule of Typical Things. To Head, at least, the answer most people give makes no sense.

To Gut, it makes perfect sense. One of Gut’s simplest rules of thumb is that the easier it is to recall examples of something, the more common that something must be. This is the “availability heuristic,” which I’ll call the Example Rule.
Kahneman and Tversky demonstrated the influence of the Example Rule in a typically elegant way. First, they asked a group of students to list as many words as they could think of that fit the form _ _ _ _ _ n _. The students had 60 seconds to work on the problem. The average number of words they came up with was 2.9. Then another group of students was asked to do the same, with the same time limit, for words that fit the form _ _ _ _ ing. This time, the average number of words was 6.4.
Look carefully and it’s obvious there’s something strange here. The first form is just like the second, except the letters “i” and “g” have been dropped. That means any word that fits the second form must fit the first. Therefore, the first form is actually more common. But the second form is much more easily recalled.
Armed with this information, Kahneman and Tversky asked another group of students to think of four pages in a novel. There are about 2,000 words on those four pages, they told students. “How many words would you expect to find that have the form _ _ _ _ ing?” The average estimate was 13.4 words. They then asked another group of students the same question for the form _ _ _ _ _ n _. The average guess was 4.7 words.
This experiment has been repeated in many different forms and the results are always the same: The more easily people are able to think of examples of something, the more common they judge that thing to be.
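The set logic behind this result can even be checked mechanically. Here is a minimal sketch in Python (my illustration, not the researchers’; the short word list is hypothetical) confirming that every word that fits the second form also fits the first, while the reverse is not true:

    import re

    # Kahneman and Tversky's two seven-letter forms:
    # "_ _ _ _ ing"   -> four letters followed by "ing"
    # "_ _ _ _ _ n _" -> any seven letters with "n" in the sixth position
    ing_form = re.compile(r"^[a-z]{4}ing$")
    n_form = re.compile(r"^[a-z]{5}n[a-z]$")

    # A small illustrative word list (hypothetical; any dictionary would do).
    words = ["hunting", "walking", "morning", "husband", "payment", "natural"]

    for w in words:
        if ing_form.match(w):
            # Any "...ing" word has "n" as its sixth letter,
            # so it must also match the more general form.
            assert n_form.match(w)

    # Words like "husband" and "payment" fit the general form without
    # ending in "ing" - which is why the first form is actually more common.
    print([w for w in words if n_form.match(w) and not ing_form.match(w)])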


Note that it is not the examples themselves that influence Gut’s intuitive judgment. It is not even the number of examples that are recalled. It is how easily examples come to mind. In a revealing study, psychologists Alexander Rothman and Norbert Schwarz asked people to list either three or eight behaviours they personally engage in that could increase their chance of getting heart disease. Strangely, those who thought of three risk-boosting behaviours rated their chance of getting heart disease to be higher than those who thought of eight. Logically, it should be the other way around – the longer the list, the greater the risk. So what gives? The explanation lies in the fact – which Rothman and Schwarz knew from earlier testing – that most people find it easy to think of three factors that increase the risk of heart disease but hard to come up with eight. And it is the ease of recall, not the substance of what is recalled, that guides the intuition.
The most dramatic example was nuclear power. Laypeople, like experts, correctly said it inflicted the fewest fatalities of the items surveyed. But the experts ranked nuclear power as the 20th most risky item on a list of 30, while most laypeople said it was number one. Later studies had 90 items, but again nuclear power ranked first. Clearly, people were doing something other than multiplying probability and body count to come up with judgments about risk.
Slovic’s analyses showed that if an activity or technology were seen as having certain qualities, people boosted their estimate of its riskiness regardless of whether it was believed to kill lots of people or not. If it were seen to have other qualities, they lowered their estimates. So it didn’t matter that nuclear power didn’t have a big body count. It had all the qualities that pressed our risk-perception buttons, and that put it at the top of the public’s list of dangers.

Slovic's Checklist

There’s plenty of evidence for rationalization but the most memorable – certainly the most bizarre – was a series of experiments on so-called split-brain patients by neuroscientist Michael Gazzaniga. Ordinarily, the left and right hemispheres of the brain are connected and they communicate in both directions but one treatment for severe epilepsy is to sever the two sides. Split-brain patients function surprisingly well but scientists realized that because the two hemispheres handle different sorts of information, each side can learn something that the other isn’t aware of. This effect could be induced deliberately in experiments by exposing only one eye or the other to written instructions. In one version of his work, Gazzaniga used this technique to instruct the right hemisphere of a split-brain patient to stand up and walk. The man got up and walked. Gazzaniga then verbally asked the man why he was walking. The left hemisphere handles such “reason” questions and even though that hemisphere had no idea what the real answer was, the man immediately responded that he was going for a soda. Variations on this experiment always got the same result: The left hemisphere quickly and ingeniously fabricated explanations rather than admit it had no idea what was going on. And the person whose lips delivered these answers believed every word.
When a woman tells a researcher how risky she thinks nuclear power is, what she says is probably a reliable reflection of her feelings. But when the researcher asks her why she feels the way she does, her answer is likely to be partly or wholly inaccurate. It’s not that she is being deceitful. It’s that her answer is very likely to be, in some degree, a conscious rationalization of an unconscious judgment. So maybe it’s true that what really bothers people about nuclear power are the qualities on Slovic’s checklist. Or maybe that stuff is just Head rationalizing Gut’s judgment. Or maybe it’s a little of both. The truth is we don’t know what the truth is.


In the years to come, however, the model of a two-track mind – Head and Gut operating simultaneously – advanced rapidly. A major influence in this development was the work of Robert Zajonc, a Stanford psychologist, who explored what we know simply as feelings or emotions. Zajonc insisted that we delude ourselves when we think that we evaluate evidence and make decisions by calculating rationally. “This is probably seldom the case,” he wrote in 1980. “We buy cars we ‘like’, we choose the jobs and houses we find ‘attractive’, and then justify those choices by various reasons.”
In a second experiment, Slovic and Alhakami had students at the University of Oregon rate the risks and benefits of a technology (different trials used nuclear power, natural gas, and food preservatives).
Then they were asked to read a few paragraphs describing some of the benefits of the technology. Finally, they were asked again to rate the risks and benefits of the technology. Not surprisingly, the positive information they read raised students' ratings of the technology's benefits in about one-half of the cases. But lots of those who raised their estimate of the technology's benefits also lowered their estimate of the risk – even though they had not read a word about the risk.
Later trials in which only risks were discussed had the same effect but in reverse: People who raised their estimate of the technology's risks in response to the information about risk also lowered their estimate of its benefit.
Various names have been used to capture what's going on here. Slovic calls it the affect heuristic. I prefer to think of it as the Good-Bad Rule. When faced with something, Gut may instantly experience a raw feeling that something is Good or Bad. That feeling then guides the judgments that follow: "Is this thing likely to kill me? It feels good. Good things don't kill. So, no, don't worry about it."
The Good-Bad Rule helps to solve many riddles. In Slovic's original studies, for example, he found that people consistently underestimated the lethality of all diseases except one: The lethality of cancer was actually overestimated. One reason for that might be the Example Rule. The media pay much more attention to cancer than diabetes or asthma and so people can easily recall examples of deaths caused by cancer even if they don't have personal experience with the disease. But consider how you feel when you read the words diabetes and asthma. Unless you or someone you care about has suffered from these diseases, chances are they don't spark any emotions. But what about the word cancer? It's like a shadow slipping over the mind.
That shadow is affect – the "faint whisper of emotion," as Slovic calls it. We use cancer as a metaphor in ordinary language – meaning something black and hidden, eating away at what's good – precisely because the word stirs feelings. And those feelings shape and colour our conscious thoughts about the disease.

The Good-Bad Rule also helps explain our weird relationship with radiation. We fear nuclear weapons, reasonably enough, while nuclear power and nuclear waste also give us the willies. Most experts argue that nuclear power and nuclear waste are not nearly as dangerous as the public thinks they are, but people will not be budged.
On the other hand, we pay good money to soak up solar radiation on a tropical beach and few people have the slightest qualms about deliberately exposing themselves to radiation when a doctor orders an X-ray. In fact, Slovic's surveys confirmed that most laypeople underestimate the (minimal) dangers of X-rays.
Why don't we worry about sun-tanning? Habituation may play a role, but the Good-Bad Rule certainly does. Picture this: you, lying on a beach in Mexico. How does that make you feel? Pretty good. And if it is a Good Thing, our feelings tell us it cannot be all that risky. The same is true of X-rays. They are medical technology that saves lives. They are a Good Thing, and that feeling eases any worries about the risk they pose.
On the other end of the scale are nuclear weapons. They are a Very Bad Thing – which is a pretty reasonable conclusion given that they are designed to annihilate whole cities in a flash. But Slovic has found feelings about nuclear power and nuclear waste are almost as negative. When Slovic and some colleagues examined how the people of Nevada felt about a proposal to create a dump site for nuclear waste in that state, they found that people judged the risk of a nuclear waste repository to be at least as great as that of a nuclear plant or even a nuclear weapons testing site. Not even the most ardent anti-nuclear activist would make such an equation. It makes no sense – unless people's judgments are the product of intensely negative feelings toward all things "nuclear."


We're not used to thinking of our feelings as the sources of our conscious decisions but research leaves no doubt. Studies of insurance, for example, have revealed that people are willing to pay more to insure a car they feel is attractive than one that is not, even when the monetary value is the same. A 1993 study even found that people were willing to pay more for airline travel insurance covering "terrorist acts" than for deaths from "all possible causes." Logically, that makes no sense, but "terrorist acts" is a vivid phrase dripping with bad feelings, while "all possible causes" is bland and empty. It leaves Gut cold.
The researchers asked Stanford University students to read one of three versions of a story about a tragic death – the cause being either leukemia, fire or murder – that contained no information about how common such tragedies are. They then gave the students a list of risks – including the risk in the story and 12 others – and asked them to estimate how often they kill. As we might expect, those who read a tragic story about a death caused by leukemia rated leukemia's lethality higher than a control group of students who didn't read the story. The same with fire and murder. More surprisingly, reading the stories led to increased estimates for all the risks, not just the one portrayed. The fire story caused an overall increase in perceived risk of 14 per cent. The leukemia story raised estimates by 73 per cent. The murder story led the pack, raising risk estimates by 144 per cent. A "good news" story had precisely the opposite effect – driving down perceived risks across the board.


So far, I've mentioned things – murder, terrorism, cancer – that deliver an unmistakable emotional wallop. But scientists have shown that Gut's emotional reactions can be much subtler than that. Robert Zajonc, along with psychologists Piotr Winkielman and Norbert Schwarz, conducted a series of experiments in which Chinese ideographs flashed briefly on a screen. Immediately after seeing an ideograph, the test subjects, students at the University of Michigan, were asked to rate the image from one to six, with six being very liked and one not liked at all. (Anyone familiar with the Chinese, Korean, or Japanese languages was excluded from the study, so the images held no literal meaning for those who saw them.)
What the students weren't told is that just before the ideograph appeared, another image was flashed. In some cases, it was a smiling face. In others, it was a frowning face or a meaningless polygon. These images appeared for the smallest fraction of a second, such a brief moment that they did not register on the conscious mind and no student reported seeing them. But even this tiny exposure to a good or bad image had a profound effect on the students' judgment. Across the board, ideographs preceded by a smiling face were liked more than those that weren't positively primed. The frowning face had the same effect in the opposite direction.
Clearly, emotion had a powerful influence and yet not one student reported feeling any emotion. Zajonc and other scientists believe that can happen because the brain system that slaps emotional labels on things – nuclear power bad! – is buried within the unconscious mind.
So your brain can feel something is good or bad even though you never consciously feel good or bad. (When the students were asked what they based their judgments on, incidentally, they cited the ideograph's aesthetics, or they said that it reminded them of something, or they simply insisted that they "just liked it." The conscious mind hates to admit it simply doesn't know.) After putting students through the routine outlined above, Zajonc and his colleagues then repeated the test. This time, however, the images of faces were switched around. If an ideograph had been preceded by a smiling face in the first round, it got a frowning face and vice versa. The results were startling. Unlike the first round, the flashed images had little effect. People stuck to their earlier judgments. An ideograph judged likeable in the first round because – unknown to the person doing the judging – it was preceded by a smiling face was judged likeable in the second round even though it was preceded by a frowning face. So emotional labels stick even if we don't know they exist.
In earlier experiments – since corroborated by a massive amount of research – Zajonc also revealed that a positive feeling for something can be created simply by repeated exposure to it, while existing positive feelings can be strengthened with more exposure. Now known as the mere exposure effect, this phenomenon is neatly summed up in the phrase "familiarity breeds liking." Corporations have long understood this, even if only intuitively. The point of much advertising is simply to expose people to a corporation's name and logo in order to increase familiarity and, as a result, positive feelings toward them.


The Good-Bad Rule also makes language critical. The world does not come with explanatory notes, after all. In seeing and experiencing things, we have to frame them this way or that to make sense of them, to give them meaning. That framing is done with language.
Life and death are somewhat more emotional matters than lean and fat beef, so it's not surprising that the words a doctor chooses can be even more influential than those used in Levin and Gaeth's experiment. A 1982 experiment by Amos Tversky and Barbara McNeil demonstrated this by asking people to imagine they were patients with lung cancer who had to decide between radiation treatment and surgery. One group was told there was a 68 per cent chance of being alive a year after the surgery. The other was told there was a 32 per cent chance of dying. Framing the decision in terms of staying alive resulted in 44 per cent opting for surgery over radiation treatment. But when the information was framed as a chance of dying, that dropped to 18 per cent. Tversky and McNeil repeated this experiment with physicians and got the same results. In a different experiment, Tversky and Daniel Kahneman also showed that when people were told a flu outbreak was expected to kill 600 people, people's judgments about which program should be implemented to deal with the outbreak were heavily influenced by whether the expected program results were described in terms of lives saved (200) or lives lost (400).
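Spelled out – my arithmetic, not the studies' wording – the two frames in each experiment describe exactly the same numbers:

    68\% \text{ alive at one year} \iff 100\% - 68\% = 32\% \text{ dead within one year}

    200 \text{ of } 600 \text{ lives saved} \iff 600 - 200 = 400 \text{ of } 600 \text{ lives lost}

Head sees two equivalent statements; Gut responds to the feelings the chosen words evoke.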
Of course the most vivid form of communication is the photographic image and, not surprisingly, there's plenty of evidence that awful, frightening photos not only grab our attention and stick in our memories – which makes them influential via the Example Rule – they conjure emotions that influence our risk perceptions via the Good-Bad Rule. It's one thing to tell smokers their habit could give them lung cancer. It's quite another to see the blackened, gnarled lungs of a dead smoker. That's why several countries, including Canada and Australia, have replaced text-only health warnings on cigarette packs with horrible images of diseased lungs, hearts, and gums. They're not just repulsive. They increase the perception of risk.
Numbers may even hinder the emotions brought out by the presence of one suffering person. Paul Slovic, Deborah Small, and George Loewenstein set up an experiment in which people were asked to donate to African relief. One appeal featured a statistical overview of the crisis, another profiled a seven-year-old girl, and a third provided both the profile and the statistics. Not surprisingly, the profile generated much more giving than the statistics alone, but it also did better than the combined profile-and-statistics pitch – as if the numbers somehow interfered with the empathetic urge to help generated by the profile of the little girl. A curious side effect of our inability to feel large numbers – confirmed in many experiments – is that proportions can influence our thoughts more than simple numbers. When Paul Slovic asked groups of students to indicate on a scale from 0 to 20 to what degree they would support the purchase of airport safety equipment, he found they expressed much stronger support when told that the equipment could be expected to save 98 per cent of 150 lives than when they were told it would save 150 lives – even though 98 per cent of 150 is only 147 lives. Even saving "85 per cent of 150 lives" garnered more support than saving 150 lives. The explanation lies in the lack of feeling we have for the number 150. It's vaguely good, because it represents people's lives, but it's abstract. We can't picture 150 lives and so we don't feel 150 lives. We can feel proportions, however. Ninety-eight per cent is almost all. It's a cup filled nearly to overflowing. And so we find saving 98 per cent of 150 lives more compelling than saving 150 lives.

Japanese prostitutes were the first women to connect silicone and plumper breasts. It was the 1950s and American servicemen in Japan preferred breasts like the ones they knew back home, so prostitutes had themselves injected with silicone or liquid paraffin. The manufactured silicone breast implant followed in the early 1960s. In 1976, the United States Food and Drug Administration was given authority over medical devices, which meant the FDA could require manufacturers to provide evidence that a device is safe in order to get permission to sell it. Breast implants were considered medical devices but because they had been sold and used for so many years without complaints, the FDA approved their continued sale without any further research. It seemed the reasonable thing to do.
The first whispers of trouble came from Japanese medical journals. Some Japanese women were being diagnosed with connective tissue diseases - afflictions like rheumatoid arthritis, fibromyalgia and lupus. These women had also been injected, years before, with silicone, and doctors suspected the two facts were linked.
In 1982 an Australian report described three women with silicone breast implants and connective tissue diseases. What this meant wasn't clear. It was well known implants could leak or rupture but could silicone seep into the body and cause these diseases? Some were sure that was happening. The same year as the Australian report, a woman in San Francisco sued implant manufacturers, demanding millions of dollars for making her sick. The media reported both these stories widely, raising concerns among more women and more doctors. More cases appeared in the medical literature. The number of diseases associated with implants grew. So did the media coverage. Fear spread.

In 1990, an episode of Face to Face With Connie Chung aired on CBS.
Tearful women told stories of pain, suffering, and loss. They blamed their silicone implants. And Chung agreed. First came the implants, then came the disease. What more needed to be said? The tone of the widely watched episode was angry and accusatory, with much of the blame focused on the FDA.
That broke the dam. Stories linking implants with disease – with headlines like "Toxic Breasts" and "Ticking Time Bombs" – flooded the media. A congressional hearing was held. Advocacy groups – including Ralph Nader's Public Citizen – made implants a top target. Feminists – who considered breast augmentation to be "sexual mutilation," in the words of best-selling writer Naomi Wolf – attacked implants as a symbol of all that was wrong with modern society.

Under intense pressure, the FDA told manufacturers in early 1992 that they had 90 days to provide evidence that implants were safe. The manufacturers cobbled together what they could but the FDA felt it was inadequate. Meanwhile, a San Francisco jury awarded $7.34 million to a woman who claimed her implants, manufactured by Dow Corning, had given her mixed connective-tissue disease. The FDA banned silicone breast implants in April 1992 although it emphasized that the implants were being banned only because they had yet to be proved safe, as the manufacturers were required to do, not because they had been proved unsafe. The roughly one million American women with the implants shouldn't worry, the FDA chief insisted.
But they did worry. Along with the successful lawsuit, the FDA ban was seen as proof that the implants were dangerous. The media filled with stories of suffering, angry women and "the trickle of lawsuits became a flood," wrote Marcia Angell, editor of the New England Journal of Medicine at the time and the author of the definitive book on the crisis, Science on Trial: The Clash Between Medical Science and the Law in the Breast Implant Case.
In 1994 the manufacturers agreed to the largest class-action settlement in history. A fund was created with $4.25 billion, including $1 billion for the lawyers who had turned implant lawsuits into a veritable industry. As part of the deal, women would have to produce medical records showing that they had implants and one of the many diseases said to be caused by implants, but they didn't have to produce evidence that the disease actually was caused by the implants – either in their case or in women generally. "Plaintiffs' attorneys sometimes referred clients to clinicians whose practice consisted largely of such patients and whose fees were paid by the attorneys," wrote Angell. "Nearly half of all women with breast implants registered for the settlement, and half of those claimed to be currently suffering from implant-related illnesses." Not even the mammoth settlement fund could cover this. Dow Corning filed for bankruptcy and the settlement collapsed. The transformation of silicone implants was complete. Once seen as innocuous objects no more dangerous than silicone contact lenses, implants were now a mortal threat. In surveys Paul Slovic conducted around this time, most people rated the implants "high risk." Only cigarette smoking was seen as more dangerous.

The breast-implant panic was at its peak in June 1994 when science finally delivered. A Mayo Clinic epidemiological survey published in the New England Journal of Medicine found no link between silicone implants and connective-tissue disease. More studies followed, all with similar results. Finally, Congress asked the Institute of Medicine (I.O.M.), the medical branch of the National Academies of Science, to survey the burgeoning research. In 1999 the I.O.M. issued its report. "Some women with breast implants are indeed very ill and the I.O.M. committee is very sympathetic to their distress," the report concluded. "However, it can find no evidence that these women are ill because of their implants."

In June 2004 Dow Corning emerged from nine years of bankruptcy. As part of its reorganization plan, the company created a fund of more than $2 billion in order to pay off more than 360 claims. Given the state of the evidence this might seem like an unfair windfall for women with implants. It was unfair to Dow Corning, certainly, but it was no windfall. Countless women had been tormented for years by the belief that their bodies were contaminated and they could soon sicken and die. In this tragedy, only the lawyers won.
In November 2006, the Food and Drug Administration lifted the ban on silicone breast implants. The devices can rupture and cause pain and inflammation, the FDA noted, but the very substantial evidence to date does not indicate that they pose a risk of disease.

Anti-implant activists were furious. They remain certain that silicone breast implants are deadly and it seems nothing can convince them otherwise. Psychologists call this confirmation bias. We all do it. Once a belief is in place, we screen what we see and hear in a biased way that ensures our beliefs are "proven" correct. Psychologists have also discovered that people are vulnerable to something called group polarization – which means that when people who share beliefs get together in groups, they become more convinced that their beliefs are right and they become more extreme in their views. Put confirmation bias, group polarization, and culture together, and we start to understand why people can come to completely different views about which risks are frightening and which aren't worth a second thought.

But that's not the end of psychology's role in understanding risk. Far from it. The real starting point for understanding why we worry and why we don't is the individual human brain. In one of the earliest studies on confirmation bias, psychologist Peter Wason simply showed people a sequence of three numbers – 2, 4, 6 – and told them the sequence followed a certain rule. The participants were asked to figure out what that rule was. They could do so by writing down three more numbers and asking if they were in line with the rule. Once you think you've figured out the rule, the researchers instructed, say so and we will see if you're right.
It seems so obvious that the rule the numbers are following is "even numbers increasing by two." So let's say you were to take the test. What would you say? Obviously, your first step would be to ask: "What about 8, 10, 12? Does that follow the rule?" And you would be told, yes, that follows the rule. Now you are really suspicious. This is far too easy. So you decide to try another set of numbers. Does "14, 16, 18" follow the rule? It does.
At this point, you want to shout out the answer – the rule is even numbers increasing by two! – but you know there's got to be a trick here. So you decide to ask about another three numbers: 20, 22, 24. Right again! Most people who take this test follow exactly this pattern. Every time they guess, they are told they are right and so, it seems, the evidence that they are right piles up. Naturally they become absolutely convinced that their initial belief is correct. Just look at all the evidence! And so they stop the test and announce that they have the answer: It is "even numbers increasing by two."
And they are told that they are wrong. That is not the rule. The correct rule is actually "any three numbers in ascending order."
Why do people get this wrong? It is very easy to figure out that the rule is not "even numbers increasing by two." All they have to do is try to disconfirm that the rule is even numbers increasing by two. They could, for example, ask if "5, 7, 9" follows the rule. Do that and the answer would be, yes, it does – which would instantly disconfirm the hypothesis. But most people do not try to disconfirm. They do the opposite, trying to confirm the rule by looking for examples that fit it. That's a futile strategy. No matter how many examples are piled up, they can never prove that the belief is correct. Confirmation doesn't work.
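The asymmetry between confirming and disconfirming is easy to make concrete. The following sketch is purely illustrative – Wason's subjects did nothing of the sort, and the Python function names are mine – but it shows that confirming tests can pile up forever without separating the guessed rule from the true one, while a single disconfirming probe settles the matter:

    def true_rule(triple):
        # Wason's actual rule: any three numbers in ascending order.
        a, b, c = triple
        return a < b < c

    def guessed_rule(triple):
        # The rule most people assume: even numbers increasing by two.
        a, b, c = triple
        return a % 2 == 0 and b == a + 2 and c == b + 2

    # Confirmation strategy: test only triples that fit the guessed rule.
    for triple in [(8, 10, 12), (14, 16, 18), (20, 22, 24)]:
        assert true_rule(triple)     # "yes, that follows the rule" ...
        assert guessed_rule(triple)  # ... but both rules agree here, so nothing is learned

    # Disconfirmation strategy: probe with a triple the guess says should fail.
    probe = (5, 7, 9)
    print(true_rule(probe))     # True  - it follows the experimenter's rule
    print(guessed_rule(probe))  # False - so the guessed rule is refuted by one test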

Unfortunately, seeking to confirm our beliefs comes naturally, while it feels strange and counterintuitive to look for evidence that contradicts our beliefs. Worse still, if we happen to stumble across evidence that runs contrary to our views, we have a strong tendency to belittle or ignore it. In 1979 – when capital punishment was a top issue in the United States – American researchers brought together equal numbers of supporters and opponents of the death penalty. The strength of their views was tested. Then they were asked to read a carefully balanced essay that presented evidence that capital punishment deters crime and evidence that it does not. The researchers then re-tested people's opinions and discovered that they had only gotten stronger. They had absorbed the evidence that confirmed their views, ignored the rest, and left the experiment even more convinced that they were right and those who disagreed were wrong.
The power of confirmation bias should not be underestimated. During the U.S. presidential election of 2004, a team of researchers led by Drew Westen at Emory University brought together 50 committed partisans – half Democrats, half Republicans – and had them lie in magnetic resonance imaging (MRI) machines. While their brains were being scanned, they were shown a series of three statements by or about George W. Bush. The second statement contradicted the first, making Bush look bad. Participants were asked whether the statements were inconsistent and were then asked to rate how inconsistent they were. A third statement then followed that provided an excuse for the apparent contradiction between the statements.
Participants were asked if perhaps the statements were not as inconsistent as they first appeared. And finally, they were again asked to rate how inconsistent the first two statements were. The experiment was repeated with John Kerry as the focus and a third time with a neutral subject.
The superficial results were hardly surprising. When Bush supporters were confronted with Bush's contradictory statements, they rated them to be less contradictory than Kerry supporters did. And when the explanation was provided, Bush supporters considered it to be much more satisfactory than did Kerry supporters. When the focus was on John Kerry, the results reversed. There was no difference between Republicans and Democrats when the neutral subject was tested.
All this was predictable. Far more startling, however, was what showed up on the MRI. When people processed information that ran against their strongly held views – information that made their favoured candidate look bad – they actually used different parts of the brain than they did when they processed neutral or positive information. It seems confirmation bias really is hard-wired in each of us, and that has enormous consequences for how opinions survive and spread.

That's on the individual level. What happens when people who share a belief get together to discuss it? Psychologists know the answer to that, and it's not pretty. They call it group polarization. It seems reasonable to think that when like-minded people get together to discuss a proposed hazardous waste site, or the breast implants they believe are making them sick, or some other risk, their views will tend to coalesce around the average within the group. But they won't. Decades of research have proved that groups usually come to conclusions that are more extreme than the average view of the individuals who make up the group. When opponents of a hazardous waste site gather to talk about it, they will become convinced the site is more dangerous than they originally believed. When a woman who believes breast implants are a threat gets together with women who feel the same way, she and all the women in the meeting are likely to leave believing they had previously underestimated the danger. The dynamic is always the same. It doesn't matter what the subject under discussion is. It doesn't matter what the particular views are. When like-minded people get together and talk, their existing views tend to become more extreme.


Of course, it's possible that people's views could be moderated by hearing new information that runs in the opposite direction – an article by a scientist denying that implants cause disease, for example. But remember confirmation bias: Every person in that meeting is prone to accepting information that supports their opinion and ignoring or rejecting information that does not. As a result, the information that is pooled at the meeting is deeply biased, making it ideal for radicalizing opinions. Psychologists have also demonstrated that because this sort of polarization is based on information-sharing alone, it does not require anything like a face-to-face conversation – a fact amply demonstrated every day on countless political blogs. Still, it is early days for this research. What is certain at this point is that we aren't the perfectly rational creatures described in outdated economics textbooks and we don't review information about risks with cool detachment and objectivity. We screen it to make it conform to what we already believe. And what we believe is deeply influenced by the beliefs of the people around us and of the culture in which we live.
In that sense, the metaphor I used at the start of this book is wrong. The intuitive human mind is not a lonely Stone Age hunter wandering a city it can scarcely comprehend. It is a Stone Age hunter wandering a city it can scarcely comprehend in the company of millions of other confused Stone Age hunters. The tribe may be a little bigger these days, and there may be more taxis than lions, but the old ways of deciding what to worry about and how to stay alive haven't changed.

The type of advertising also makes a difference. It turned out that the effect of the emotional "enthusiasm" ad was universal – it influenced everybody whether they knew anything about politics or not. But the effect of the fear-based ad was divided. It did not boost the rate at which those who knew less about politics said they would get involved in politics by voting. But it did significantly influence those who knew more, making them much more likely to say they would volunteer and vote. So the assumption of political experts is wrong. It isn't the less informed who are likely to be influenced by fear-driven advertising.
It is the more informed. Apparently, greater awareness and commitment make emotional messages more resonant - and being better informed is no guarantee that Head will step in and tell Gut to relax.


Still, if the political experts were wrong about who is more likely to be influenced by fear, they were dead-on about the central role played by emotion in political marketing. "The audiovisual 'packaging' may be paramount to their effectiveness," Brader writes. Remove the word "may" and replace it with "is" and you have the standard advice supplied by every political consultant. "A visual context that supports and reinforces your language will provide a multiplier effect, making your message that much stronger," advises Republican guru Frank Luntz in his book Words That Work. But more than that, "a striking visual context can overwhelm the intended verbal message entirely."

This sort of mismatch between tragic tale and cold numbers is routine in the media, particularly in stories about cancer. In 2001, researchers led by Wylie Burke of the University of Washington published an analysis of articles about breast cancer that appeared in major U.S. magazines between 1993 and 1997. Among the women who appeared in these stories, 84 per cent were younger than 50 years old when they were first diagnosed with breast cancer; almost half were under 40. But as the researchers noted, the statistics tell a very different story: Only 16 per cent of women diagnosed with breast cancer were younger than 50 at the time of diagnosis, and 3.6 per cent were under 40. As for the older women who are most at risk of breast cancer, they were almost invisible in the articles. Only 2.3 per cent of the profiles featured women in their sixties and not one article out of 172 profiled a woman in her seventies – even though two-thirds of women diagnosed with breast cancer are 60 or older. In effect, the media turned the reality of breast cancer on its head. Surveys in Australia and the United Kingdom made the same discovery.

In Daniel Krewski's 2004 survey, "natural health products" were deemed by far the safest of the 30 presented – safer even than X-rays and tap water. Prescription drugs were seen to be riskier, while pesticides were judged to be more dangerous than street crime and nuclear power plants. It's not hard to guess at the thinking behind this, or to see how dominated it is by Gut. Natural and healthy are very good things so natural health products must be safe. Prescription drugs save lives, so while they may not be as safe as "natural health products" – everyone knows prescription drugs can have adverse effects – they are still good and therefore relatively safe. But "pesticides" are "manmade" and "chemical" – and therefore dangerous. The irony here is that few of the "natural health products" that millions of people happily pop in their mouths and swallow have been rigorously tested to see if they work, and the safety regulations they have to satisfy are generally quite weak – unlike the laws and regulations governing prescription drugs and pesticides.


The media, in pursuit of the dramatic story, are another contributor to prevailing fears about chemicals. Robert Lichter and Stanley Rothman scoured stories about cancer appearing in the American media between 1972 and 1992 and found that tobacco was only the second-most mentioned cause of cancer – and it was a distant second. Man-made chemicals came first. Third was food additives. Number 6 was pollution, 7 radiation, 9 pesticides, and 12 was dietary choices. Natural chemicals came 16th. Dead last on the list of 25 – mentioned in only nine stories – was the most important factor: aging. Lichter and Rothman also found that of the stories that expressed a view on whether the United States was facing a cancer epidemic, 85 per cent said it was. This has a predictable effect on public opinion. In November 2007 the American Institute for Cancer Research (AICR) released the results of a survey in which Americans were asked about the causes of cancer. The institute noted with regret that only 49 per cent of Americans identified a diet low in fruits and vegetables as a cause of cancer; 46 per cent said the same of obesity; 37 per cent, alcohol; and 56 per cent, diets high in red meat. But 71 per cent said pesticide residues on food cause cancer. "There's a disconnect between public fears and scientific fact," said an AICR spokesperson.


Lichter and Rothman argue that the media's picture of cancer is the result of paying too little attention to cancer researchers and far too much to environmentalists. As John Higginson noted almost 30 years ago, the idea that synthetic chemicals cause cancer is "convenient" for activists opposed to chemical pollution. If DDT had threatened only birds, Rachel Carson would probably never have created the stir she did with Silent Spring. It's the connection between pollution and human health that makes the environment a personal concern, and connecting synthetic chemicals to health is easy because the chemicals are everywhere and Gut tells us they must be dangerous no matter how tiny the amounts may be. Add the explosive word cancer and you have a very effective way to generate support for environmental action.

But then, the existence of an "epidemic of cancer" is often taken by environmentalists to be such an obvious fact that it hardly needs to be demonstrated. In a 2005 newspaper column, Canada's David Suzuki – a biologist and renowned environmentalist – blamed chemical contamination for the "epidemic of cancer afflicting us." His proof consisted of a story about catching a flounder that had cancerous tumours and the fact that "this year, for the first time, cancer has surpassed heart disease as our number one killer." But it is not true, as Suzuki seems to assume, that cancer's rise to leading killer means cancer is killing more people. It is possible that heart disease is killing fewer people. And that turns out to be the correct explanation. Statistics Canada reported that the death rates of both cardiovascular disease and cancer are falling but "much more so for cardiovascular disease."
What's left out here is the simple fact that cancer is primarily a disease of aging, a fact which has a profound effect on cancer statistics. The rate of cancer deaths in Florida, for example, is almost three times higher than in Alaska, which looks extremely important until you factor in Florida's much older population. "When the cancer death rates for Florida and Alaska are age-adjusted," notes a report from the American Cancer Society, "they are almost identical."
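Age adjustment is just a weighted average. In the standard direct method – a general formula, not figures from the cancer society's report – each state's age-specific death rates are averaged using the same standard population weights, so differences in age mix cancel out:

    \text{age-adjusted rate} = \sum_{a} w_a \, r_a, \qquad w_a = \frac{\text{standard population in age group } a}{\text{total standard population}}

Here r_a is the state's observed death rate in age group a. Florida's crude rate is high because its own population is concentrated in the older age groups, where r_a is large; average both states with the same standard weights and the gap nearly disappears.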

In the 1990s, as worries about breast cancer rose, activists often said that "one in eight" American women would get breast cancer in their lifetimes. That was true, in a sense. But what wasn't mentioned was that to face that full one-in-eight risk, a woman has to live to 95. The numbers look very different at younger ages: The chance of getting breast cancer by age 70 is 1 in 14 (or 7 per cent); by age 50 it is 1 in 50 (2 per cent); by age 40 it is 0.4 per cent; by age 30 it is 1 in 2,525 (0.03 per cent). "To emphasize only the highest risk is a tactic meant to scare rather than inform," Russell Harris, a cancer researcher at the University of North Carolina, told U.S. News and World Report.

Aging shouldn't affect data on childhood cancers, however, and those who claim chemical contamination is a serious threat say childhood cancers are soaring. They are up "25 per cent in the last 30 years," journalist Wendy Mesley said in her CBC documentary. That statistic is true, to a degree, but it is also a classic example of how badly presented information about risk can mislead. Mesley is right that the rate of cancer among Canadian children is roughly 25 per cent higher now than it was 30 years ago. But what she didn't say is that the increase occurred between 1970 and 1985 and then stopped. "The overall incidence of childhood cancer has remained relatively stable since 1985," says the 2004 Progress Report on Cancer Control from the Public Health Agency of Canada.

The first step in correcting our mistakes of intuition has to be a healthy respect for the scientific process. Scientists have their biases, too, but the whole point of science is that as evidence accumulates, scientists argue among themselves based on the whole body of evidence, not just bits and pieces. Eventually, the majority tentatively decides in one direction or the other. It's not a perfect process, by any means; it's frustratingly slow and it can make mistakes. But it's vastly better than any other method humans have used to understand reality.
The next step in dealing with risk rationally is to accept that risk is inevitable. In Daniel Krewski's surveys, he found that about half the Canadian public agreed that a risk-free world is possible. "A majority of the population expects the government or other regulatory agencies to protect them completely from all risk in their daily lives," he says with more than a hint of amazement in his voice. "Many of us who work in risk management have been trying to get the message out that you cannot guarantee zero risk; it's an impossible goal."

Note: Italics and bold put in by Mr. Duncan

 

Questions:

1. What are the two systems of thought in the brain?

2. Explain the Anchoring Rule using the Gandhi questions.

3. What proof is there for The Rule of Typical Things?

4. Give two examples of evidence supporting the Example Rule

5. Think of a high risk activity and use Slovic's checklist to see how many criteria it has.

6. What did the split-brain patients do?

7. How does the Good-Bad Rule use feelings and words and statistics to affect people's choices?

8. Do frowning or smiling images flashed subconsciously affect our choices? What happens when we try to change those choices once already formed?

9. How did breast implants become dangerous? Are they really dangerous?

10. Explain how confirmation bias and group polarization affect the assessment of risk.

11. How do types of advertising and audiovisual packaging alter views?

12. What is the author's message about natural foods and about risk of cancer?

13. What two things should we do to manage risk?