Top Quotes: “Thinking, Fast and Slow” — Daniel Kahneman

Austin Rose
49 min read · Aug 25, 2021

Introduction

“People tend to assess the relative importance of issues by the ease with which they’re retrieved from memory — and this is largely determined by the extent of coverage in the media. Frequently mentioned topics populate the mind even as others slip away from awareness. In turn, what the media choose to report corresponds to their view of what is currently on the public’s mind. It’s no accident that authoritarian regimes exert substantial pressure on independent media.”

When faced with a difficult question, we often answer an easier one instead, usually without noticing the substitution.

“The spontaneous search for an intuitive solution sometimes fails — neither an expert solution nor a heuristic answer comes to mind. In such cases we often find ourselves switching to a slower, more deliberate and effortful form of thinking. This is slow thinking. Fast thinking includes both variants of intuitive thought — the expert and the heuristic — as well as the entirely automatic mental activities of perception and memory, the operations that enable you to know there’s a lamp on your desk or retrieve the name of the capital of Russia.”

“We easily think associatively, we think metaphorically, we think causally, but stats requires thinking about many things at once, which is something that System 1 (fast thinking) is not designed to do.

The difficulties of statistical thinking contribute to a puzzling limitation of our mind: our excessive confidence in what we believe we know, and our apparent inability to acknowledge the full extent of our ignorance and the uncertainty of the world we live in. We’re prone to overestimate how much we understand about the world and to underestimate the role of chance in events.”

“It’s normally easy and actually quite pleasant to walk and think at the same time, but at the extremes these activities appear to compete for the limited resources of System 2 (slow thinking). You can confirm this claim by a simple experiment. While walking comfortably with a friend, ask him to compute 23 x 78 immediately. He’ll almost certainly stop in his tracks. My experience is that I can think while strolling but cannot engage in mental work that imposes a heavy load on short-term memory. If I must construct an intricate argument under time pressure, I would rather be still, and I’d prefer to be sitting. Of course, not all slow thinking requires that form of intense concentration or effortful computation.

Accelerating beyond my strolling speed completely changes the experience of walking, because the transition to a faster walk brings about a sharp deterioration in my ability to think coherently. As I speed up, my attention is drawn with increasing frequency to the experience of walking and to the deliberate maintenance of the faster pace. My ability to bring a train of thoughts to a conclusion is impaired accordingly. In addition to the physical effort of moving my body rapidly, a mental effort of self-control is needed to resist the urge to slow down. Self-control and deliberate thought apparently draw on the same limited budget of effort.”

“Flow neatly separates the 2 forms of effort: concentration on the task and the deliberate control of attention. Riding a motorcycle at 150 miles an hour and playing a competitive game of chess are certainly very effortful. In a state of flow, however, maintaining focused attention on these absorbing activities requires no exertion of self-control, thereby freeing resources to be directed to the task at hand.”

Self-Control

“People who are cognitively busy are more likely to make selfish choices, use sexist language, and make superficial judgments in social situations. Memorizing and repeating digits loosens the hold of System 2 on behavior, but of course cognitive load isn’t the only cause of weakened self-control. A few drinks have the same effect, as does a sleepless night. The self-control of morning people is impaired at night; the reverse is true of night people. Too much concern about how well one is doing in a task sometimes disrupts performance by loading short-term memory with pointless anxious thoughts. The conclusion is straightforward: self-control requires attention and effort.”

“The list of situations and tasks that are now known to deplete self-control is long and varied. All involve conflict and the need to suppress a natural tendency. They include:

  • Avoiding the thought of white bears
  • Inhibiting the emotional response to a stirring film
  • Making a series of choices that involve conflict
  • Trying to impress others
  • Responding kindly to a partner’s bad behavior
  • Interacting with a person of a different race (if you’re prejudiced)

The list of indications of depletion is also highly diverse:

  • Deviating from one’s diet
  • Overspending on impulsive purchases
  • Reacting aggressively to provocation
  • Persisting less time in a handgrip task
  • Performing poorly in cognitive tasks and logical decision making”

“Intelligence is not only the ability to reason; it’s also the ability to find relevant material in memory and to deploy attention when needed. Memory function is an attribute of System 1. However, everyone has the option of slowing down to conduct an active search of memory for all possibly relevant facts. The extent of deliberate checking and search is a characteristic of System 2, which varies among individuals.”

“‘Lazy’ is a harsh judgment about the self-monitoring of these young people and their System 2, but it doesn’t seem to be unfair. Those who avoid the sin of intellectual sloth could be called ‘engaged.’ They’re more alert, more intellectually active, less willing to be satisfied with superficially attractive answers, more skeptical about their intuitions, more rational.”

“The core of his argument is that rationality should be distinguished from intelligence. In his view, superficial or ‘lazy’ thinking is a flaw in the reflective mind, a failure of rationality.”

Priming

“In an experiment that became an instant classic, psychologist John Bargh and his collaborators asked NYU students aged 18–22 to assemble 4-word sentences from a set of 5 words (for example, ‘finds he it yellow instantly’). For 1 group of students, half the scrambled sentences contained words associated with the elderly, such as Florida, forgetful, bald, gray, or wrinkle. When they’d completed the task, the young participants were sent out to do another experiment in an office down the hall. That short walk was what the experiment was about. The researchers unobtrusively measured the time it took people to get from one end of the corridor to another. As Bargh had predicted, the young people who’d fashioned a sentence from words with an elderly theme walked down the hallway significantly more slowly than the others.

The ‘Florida effect’ involves 2 stages of priming. First, the set of words primes thoughts of old age, though the word old is never mentioned; second, these thoughts prime a behavior, walking slowly, which is associated with old age. All this happens without any awareness. When they were questioned afterward, none of the students reported noticing that the words had a common theme, and they all insisted that nothing they did after the first experiment could’ve been influenced by the words they’d encountered. The idea of old age had not come to their conscious awareness, but their actions had changed nevertheless. This remarkable priming phenomenon — the influencing of an action by the idea — is known as the ideomotor effect. Although you surely weren’t aware of it, reading this paragraph primed you as well. If you’d needed to stand up to get a glass of water, you would’ve been slightly slower than usual to rise from your chair — unless you happen to dislike the elderly, in which case research suggests that you might’ve been slightly faster than usual!

The ideomotor link also works in reverse. A German study was the mirror image of the NY study. Students were asked to walk around a room for 5 minutes at a rate of 30 steps per minute, which was about 1/3 their normal pace. After this brief experience, the participants were much quicker to recognize words related to old age, such as forgetful, old, and lonely. Reciprocal priming effects tend to produce a coherent reaction: if you were primed to think of old age, you would tend to act old, and acting old would reinforce the thought of old age.

Reciprocal links are common in the associative network. For example, being amused tends to make you smile, and smiling tends to make you feel amused. Go ahead and take a pencil, and hold it between your teeth for a few seconds with the eraser pointing to your right and the point to the left. Now hold the pencil so the point is aimed straight in front of you, by pursing your lips around the eraser end. You were probably unaware that one of these actions forced your face into a frown and the other into a smile. Students were asked to rate the humor of The Far Side cartoons while holding a pencil in their mouth. Those who were ‘smiling’ (without any awareness of doing so) found the cartoons funnier than those who were ‘frowning.’ In another experiment, people whose face was shaped into a frown (by squeezing their eyebrows together) reported an enhanced emotional response to upsetting photos — starving children, people arguing, maimed accident victims.

Simple, common gestures can also unconsciously influence our thoughts and feelings. In one demonstration, people were asked to listen to messages through new headphones. They were told that the purpose of the experiment was to test the quality of the audio equipment and were instructed to move their heads repeatedly to check for any distortions of sound. Half the participants were told to nod their head up and down while others were told to shake it side to side. The messages they heard were radio editorials. Those who nodded tended to accept the message they heard, but those who shook their head tended to reject it. Again, there was no awareness, just a habitual connection between an attitude of rejection or acceptance and its common physical expression. You can see why the common admonition to ‘act calm and kind regardless of how you feel’ is very good advice: you’re likely to be rewarded by actually feeling calm and kind.”

“Studies of priming effects have yielded discoveries that threaten our self-image as conscious and autonomous authors of our judgments and our choices. For instance, most of us think of voting as a deliberate act that reflects our values and our assessments of policies and isn’t influenced by irrelevancies. Our vote shouldn’t be affected by the polling station location, for example, but it is. A study of AZ voting patterns in 2000 showed that the support for props to increase the funding of schools was significantly greater when the polling station was in a school than when it was in a nearby location. A separate experiment showed that exposing people to images of classrooms and school lockers also increased the tendency of participants to support a school initiative. The effect of the images was larger than the difference between parents and other voters! Priming can reach into every corner of our lives.

Reminders of money produce some troubling effects. Participants in 1 experiment were shown a list of 5 words from which they were required to construct a 4-word phrase that had a money theme (‘high a salary desk paying’ became ‘a high-paying salary’). Other primes were much more subtle, including the presence of an irrelevant money-related object in the background, such as a stack of Monopoly money on a table, or a computer with a screen saver of dollar bills floating in water.

Money-primed people became more independent than they would be without the associative trigger. They persevered almost twice as long in trying to solve a very difficult problem before they asked the experimenter for help, a crisp demonstration of increased self-reliance. Money-primed people are also more selfish: they were much less willing to spend time helping another student who pretended to be confused about an experimental task. When an experimenter clumsily dropped a bunch of pencils on the floor, the participants with money (unconsciously) on their mind picked up fewer pencils. In another experiment in the series, participants were told that they would shortly have a get-acquainted conversation with another person and were asked to set up 2 chairs while the experimenter left to retrieve the person. Participants primed by money chose to stay much farther apart than their non-primed peers (118 vs. 80 cm). Money-primed undergrads also showed a greater preference for being alone.

The general theme of the findings is that the idea of money primes individualism: a reluctance to be involved with others, to depend on others, or to accept demands from others.”

“Her findings suggest that living in a culture that surrounds us with reminders of money may shape our behavior and our attitudes in ways that we don’t know about and of which we may not be proud. Some cultures provide frequent reminders of respect, others constantly remind their members of God, and some societies prime obedience by large images of the Dear Leader. Can there be any doubt that the ubiquitous portraits of the national leader in dictatorial societies not only convey the feeling that ‘Big Brother is Watching’ but also lead to an actual reduction in spontaneous thought and independent action?

The evidence of priming studies suggests that reminding people of their mortality increases the appeal of authoritarian ideas, which may become reassuring in the context of the terror of death. Other experiments have confirmed Freudian insights about the role of symbols and metaphors in unconscious associations. For example, consider the ambiguous word fragments W_ _ H and S _ _ P. People who were recently asked to think of an action of which they are ashamed are more likely to complete those fragments as WASH and SOAP and less likely to see WISH and SOUP. Furthermore, merely thinking about stabbing a coworker in the back leaves people more inclined to buy soap, disinfectant, or detergent than batteries, juice, or candy bars. Feeling that one’s soul is stained appears to trigger a desire to cleanse one’s body.

The cleansing is highly specific to the body parts involved in a sin. Participants in an experiment were induced to ‘lie’ to an imaginary person, either on the phone or in email. In a subsequent test of the desirability of various products, people who’d lied on the phone preferred mouthwash over soap, and those who’d lied in email preferred soap to mouthwash.”

Persuasion

“A reliable way to make people believe in falsehoods is frequent repetition, because familiarity is not easily distinguished from truth. Authoritarian institutions and marketers have always known this fact. But it was psychologists who discovered that you don’t have to repeat the entire statement of a fact or idea to make it appear true. People who were repeatedly exposed to the phrase ‘the body temperature of a chicken’ were more likely to accept as true the statement that ‘the body temperature of a chicken is 144’ (or any other arbitrary number). The familiarity of one phrase in the statement sufficed to make the whole statement feel familiar, and therefore true. If you can’t remember the source of a statement, and have no way to relate it to other things you know, you have no option but to go with the sense of cognitive ease.”

“Suppose you must write a message that you want the recipients to believe. Of course, your message will be true, but that’s not necessarily enough for people to believe that it’s true. It’s entirely legitimate for you to enlist cognitive ease to work in your favor, and studies of truth illusions provide specific suggestions that may help you achieve this goal.

The general principle is that anything you can do to reduce cognitive strain will help, so you should first maximize legibility. Compare these 2 statements:

Hitler was born in 1892.

Hitler was born in 1887.

Both are false (he was born in 1889), but experiments have shown that the first (printed in the book in bolder, more legible type) is more likely to be believed. More advice: if your message is to be printed, use high-quality paper to maximize the contrast between characters and their background. If you use color, you’re more likely to be believed if your text is printed in bright blue or red than in middling shades of green, yellow, or pale blue.”

“Aphorisms were judged more insightful when they rhymed than when they didn’t.

If you quote a source, choose one with a name that’s easy to pronounce. Participants in an experiment were asked to evaluate the prospects of fictitious Turkish stocks on the basis of reports from 2 brokerage firms. For each stock, 1 of the reports came from a firm with an easily pronounced name (e.g. Artan) and the other came from a firm with an unfortunate name (e.g. Taahhut). The reports sometimes disagreed. The best procedure for the observers would’ve been to average the 2 reports, but this isn’t what they did. They gave much more weight to the Artan report. Remember that System 2 is lazy and that mental effort is aversive. If possible, the recipients of your message want to stay away from anything that reminds them of effort.

All this is very good advice, but we shouldn’t get carried away. High-quality paper, bright colors, and rhyming or simple language won’t be much help if your message is obviously nonsensical, or if it contradicts facts that your audience knows to be true. The psychologists who do these experiments don’t believe that people are stupid or infinitely gullible. What psychologists do believe is that all of us live much of our life guided by the impressions of System 1 — and we often don’t know the source of these impressions. How do you know that a statement is true? If it’s strongly linked by logic or association to other beliefs or preferences you hold, or comes from a source you trust and like, you’ll feel a sense of cognitive ease. The trouble is that there may be other causes for your feeling of ease — including the quality of the font and the appealing rhythm of the prose — and you have no simple way of tracing your feelings to their source. The sense of ease or strain has multiple causes, and it’s difficult to tease them apart. Difficult, but not impossible.”

“When a mysterious series of ads in student newspapers ended, investigators sent questionnaires to the university communities, asking for impressions of whether each of the words ‘means something good or something bad.’ The results were spectacular: the words that were presented more frequently were rated much more favorably than the words that had been shown only once or twice. This finding has been confirmed in many experiments, using Chinese ideographs, faces, or randomly shaped polygons.

The mere exposure effect doesn’t depend on the conscious experience of familiarity. In fact, the effect doesn’t depend on consciousness at all: it occurs even when the repeated words or pictures are shown so quickly that the observers never become aware of having seen them. They still end up liking the words or pictures presented more frequently. System 1 can respond to impressions of events of which System 2 is unaware. Indeed, the mere exposure effect is actually stronger for stimuli that the individual never consciously sees.

Robert Zajonc argued that the effect of repetition on liking is a profoundly important biological fact, and that it extends to all animals. To survive in a frequently dangerous world, an organism should react cautiously to a novel stimulus, with withdrawal and fear. Survival prospects are poor for an animal that’s not suspicious of novelty. However, it’s also adaptive for the initial caution to fade if the stimulus is actually safe. The mere exposure effect occurs, Zajonc claimed, because the repeated exposure of a stimulus is followed by nothing bad. Such a stimulus will eventually become a safety signal, and safety is good.”

“They found that putting participants in a good mood before the test by having them think happy thoughts more than doubled accuracy. An even more striking result is that unhappy subjects were completely incapable of performing the intuitive task accurately; their guesses were no better than random. Mood evidently affects the operation of System 1: when we’re uncomfortable and unhappy, we lose touch with our intuition.

These findings add to a growing body of evidence that good mood, intuition, creativity, gullibility, and increased reliance on System 1 form a cluster. At the other pole, sadness, vigilance, suspicion, an analytic approach, and increased effort also go together. A happy mood loosens the control of System 2 over performance: when in a good mood, people become more intuitive and more creative but also less vigilant and more prone to logical errors. Here again, as in the mere exposure effect, the connection makes biological sense. A good mood is a signal that things are generally going well, the environment is safe, and it’s all right to let one’s guard down. A bad mood indicates that things aren’t going well, there may be a threat, and vigilance is required. Cognitive ease is both a cause and a consequence of pleasant feeling.”

“A single incident may make a recurrence less surprising. Some years ago, my wife and I were vacationing on an island in the Great Barrier Reef with only 40 guest rooms. When we came to dinner, we were surprised to meet an acquaintance named Jon. We greeted each other warmly and commented on the coincidence. Jon left the next day. About 2 weeks later, we were in a theater in London. A latecomer sat next to me after the lights went down. When the lights came up for intermission, I saw that my neighbor was Jon. My wife and I commented later that we were simultaneously conscious of two facts: first, this was a more remarkable coincidence than the first meeting; second, we were distinctly less surprised to meet Jon on the second occasion than we’d been on the first. Evidently, the first meeting had somehow changed the idea of Jon in our minds. He was now ‘the guy who shows up when we travel abroad.’ We (System 2) knew this was a ludicrous idea, but our System 1 had made it almost normal to meet Jon in strange places. We would’ve experienced much more surprise if we’d met any acquaintance other than Jon in the next seat of a London theater. By any measure of probability, meeting Jon in the theater was no more likely than meeting any one of our hundreds of acquaintances — yet meeting Jon seemed more normal.”

“Your mind is ready and even eager to identify agents, assign them personality traits and specific intentions, and view their actions as expressing individual propensities. Here again, the evidence is that we’re born prepared to make intentional attributions: infants under age 1 identify bullies and victims, and expect a pursuer to follow the most direct path in attempting to catch whatever it’s chasing.

The experience of freely willed action is quite separate from physical causality. Although it’s your hand that picks up the salt, you don’t think of the event in terms of a chain of physical causation. You experience it as caused by a decision that a disembodied you made, because you wanted to add salt to your food. Many people find it natural to describe their soul as the source and the cause of their actions. Psychologist Paul Bloom presented the provocative claim that our inborn readiness to separate physical and intentional causality explains the near universality of religious beliefs. He observes that ‘we perceive the world of objects as essentially separate from the world of minds, making it possible for us to envision soulless bodies and bodiless souls.’ The two modes of causation that we’re set to perceive make it natural for us to accept the 2 central beliefs of many religions: an immaterial divinity is the ultimate cause of the physical world, and immortal souls temporarily control our bodies while we live and leave them behind as we die. In Bloom’s view, the 2 concepts of causality were shaped separately by evolutionary forces, building the origins of religion into the structure of System 1.”

“In 1 condition of an experiment, subjects were required to hold digits in memory during the task. The disruption of System 2 had a selective effect: it made it difficult for people to ‘unbelieve’ false sentences. In a later memory test, the depleted participants ended up thinking that many of the false sentences were true. The moral is significant: when System 2 is otherwise engaged, we’ll believe almost anything. System 1 is gullible and biased to believe, System 2 is in charge of doubting and unbelieving, but System 2 is sometimes busy, and often lazy. Indeed, there’s evidence that people are more likely to be influenced by empty persuasive messages, such as commercials, when they’re tired and depleted.”

The Halo Effect

“If you like the president’s politics, you probably like his voice and appearance as well. The tendency to like (or dislike) everything about a person — including things you haven’t observed — is known as the halo effect, a good name for the common bias that plays a large role in shaping our view of people and situations.”

“The sequence in which we observe characteristics of a person is often determined by chance. Sequence matters, however, because the halo effect increases the weight of first impressions, sometimes to the point that subsequent info is mostly wasted. Early in my career as a professor, I graded students’ essay exams conventionally. I’d pick up 1 test booklet at a time and read all that student’s essays in immediate succession, grading them as I went. I would then compute the total and go on to the next student. I eventually noticed that my evaluations of the essays in each booklet were strikingly homogeneous. I began to suspect that my grading exhibited a halo effect, and that the first question I scored had a disproportionate effect on the overall grade. If I’d given a high score to the first essay, I gave the student the benefit of the doubt whenever I encountered a vague or ambiguous statement later on. This seemed reasonable. Surely a student who’d done so well on the first essay wouldn’t make a foolish mistake in the second one! But there was a serious problem with my way of doing things. If a student had written 2 essays, one strong and one weak, I would end up with different final grades depending on which essay I read first. I had told the students that the 2 essays had equal weight, but that wasn’t true: the first had a much greater impact on the final grade. This was unacceptable.

I adopted a new procedure. Instead of reading the booklets in sequence, I read and scored all the students’ answers to the first question, then went on to the next one. I made sure to write all the scores on the inside back page of the booklet so that I wouldn’t be biased (even unconsciously) when I read the second essay. Soon after switching to the new method, I made a disconcerting observation: my confidence in my grading was now much lower than it had been. The reason was that I frequently experienced a discomfort that was new to me. When I was disappointed with a student’s second essay and went to the back page of the booklet to enter a poor grade, I occasionally noticed that I’d given a top grade to the same student’s first essay. I also noticed that I was tempted to reduce the discrepancy by changing the grade that I hadn’t yet written down.”

“The principle of independent judgments (and decorrelated errors) has immediate applications for the conduct of meetings. A simple rule can help: before an issue is discussed, all members of the committee should be asked to write a very brief summary of their position. This procedure makes good use of the value of the diversity of the knowledge and opinion in the group. The standard practice of open discussion gives too much weight to the opinions of those who speak early and assertively, causing others to line up behind them.”

Consistency

“It’s the consistency of info that matters for a great story, not its completeness. Indeed, you’ll often find that knowing little makes it easier to fit everything you know into a coherent pattern.”

“Todorov showed his students pictures of men’s faces, sometimes for as little as 1/10 of a second, and asked them to rate the faces on various attributes, including likability and competence. Observers agreed quite well on those ratings. The faces that Todorov showed weren’t a random set: they were the campaign portraits of politicians competing for elective office. Todorov then compared the results of the electoral races to the ratings of competence that Princeton students had made, based on brief exposure to photos and without any political context. In 70% of the races for senator, congressman, and governor, the election winner was the candidate whose face had earned a higher rating of competence. This striking result was quickly confirmed in Finnish national elections, zoning board elections in the UK, and in various electoral contests in Australia, Germany, and Mexico. Surprisingly, ratings of competence were far more predictive of voting outcomes in Todorov’s study than ratings of likability.

Todorov has found that people judge competence by combining the 2 dimensions of strength and trustworthiness. The faces that exude competence combine a strong chin with a slight confident-appearing smile.”

“If a satisfactory answer to a hard question isn’t found quickly, System 1 will find a related question that’s easier and will answer it.”

Causation

“A random event, by definition, doesn’t lend itself to explanation, but collections of random events do behave in a highly regular fashion. Imagine a large urn filled with marbles — half red, half white. Next, imagine a very patient person who blindly draws 4 marbles from the urn, records the number of red balls in the sample, throws the balls back in the urn, and then does it all again, many times. If you summarize the results, you’ll find that the outcome ‘2 red, 2 white’ occurs (almost exactly) 6 times as often as the outcome ‘4 red’ or ‘4 white.’ This relationship is a mathematical fact. You can predict the outcome of repeated sampling from an urn just as confidently as you can predict what will happen if you hit an egg with a hammer. You can’t predict every detail of how the shell will shatter, but you can be sure of the general idea. There’s a difference: the satisfying sense of causation that you experience when thinking of a hammer hitting an egg is altogether absent when you think about sampling.”
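
The 6-to-1 ratio is easy to verify: of the 2^4 = 16 equally likely sequences of 4 draws, C(4, 2) = 6 give ‘2 red, 2 white’ and only 1 gives ‘4 red.’ Here is a minimal Python sketch of both the exact calculation and the patient person’s experiment, treating each draw as a fair coin flip (a fair model, since the urn is large):

```python
import random
from math import comb

# Exact binomial probabilities: 4 draws, each red with probability 1/2.
p_2_red_2_white = comb(4, 2) / 2**4   # 6/16
p_4_red = comb(4, 4) / 2**4           # 1/16
print(p_2_red_2_white / p_4_red)      # 6.0

# The patient person's experiment, repeated a million times.
trials = 1_000_000
tallies = {2: 0, 4: 0}
for _ in range(trials):
    reds = sum(random.random() < 0.5 for _ in range(4))
    if reds in tallies:
        tallies[reds] += 1
print(tallies[2] / tallies[4])        # ~6.0
```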

“We started from a fact that calls for a cause: the incidence of kidney cancer varies widely across counties and the differences are systematic. The explanation I offered is statistical: extreme outcomes (both high and low) are more likely to be found in small than in large samples. This explanation isn’t causal. The small population of a county neither causes nor prevents cancer; it merely allows for the incidence of cancer to be much higher (or lower) than it is in the larger population. The deeper truth is that there’s nothing to explain. The incidence of cancer isn’t truly higher or lower than normal in a county with a small population, it just appears to be so in a particular year because of an accident of sampling. If we repeat the analysis next year, we’ll observe the same general pattern of extreme results in the small samples, but the counties where cancer was common last year will not necessarily have a high incidence this year. If this is the case, the differences between dense and rural counties don’t really count as facts: they’re what scientists call artifacts, observations that are produced entirely by some aspect of the method of research — in this case, by differences in sample size.”
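
The sampling artifact is easy to reproduce. In the hypothetical simulation below, every county has exactly the same underlying incidence (the county populations and the rate are invented for illustration), yet the extreme observed rates all come from the small counties:

```python
import numpy as np

rng = np.random.default_rng(1)
true_rate = 0.001  # identical underlying incidence everywhere (invented value)

# 100 small rural counties vs. 100 large dense ones (invented sizes)
small = rng.binomial(2_000, true_rate, size=100) / 2_000
large = rng.binomial(200_000, true_rate, size=100) / 200_000

print(small.min(), small.max())  # wide spread: extreme lows AND highs
print(large.min(), large.max())  # tightly clustered around 0.001
```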

“Psychologists commonly chose samples so small that they exposed themselves to a 50% risk of failing to confirm their true hypotheses! No researcher in their right mind would accept such a risk. A plausible explanation was that psychologists’ decisions about sample size reflected prevalent intuitive misconceptions of the extent of sampling variation.

Like most research psychologists, I’d routinely chosen samples that were too small and had often obtained results that made no sense. Now I knew why. My mistake was particularly embarrassing because I taught stats and knew how to compute the sample size that would reduce the risk of failure to an acceptable level. But I’d never chosen a sample size by computation. Like my colleagues, I’d trusted tradition and my intuition in planning my experiments and had never thought seriously about the issue.”

“We don’t expect to see regularity produced by a random process, and when we detect what appears to be a rule, we quickly reject the idea that the process is truly random. Random processes produce many sequences that convince people that the process isn’t random after all. You can see why assuming causality could’ve had evolutionary advantages. It’s part of the general vigilance that we’ve inherited from ancestors. We’re automatically on the lookout for the possibility that the environment has changed. Lions may appear on the plain at random times, but it would be safer to notice and respond to an apparent increase in the rate of appearance of prides of lions, even if it’s actually due to the fluctuations of a random process.”

The Anchoring Effect

“The phenomenon we were studying is so common and important in the everyday world that you should know its name: an anchoring effect. It occurs when people consider a particular value for an unknown quantity before estimating that quantity. What happens is one of the most reliable and robust results of experimental psych: the estimates stay close to the number that people considered — hence the image of an anchor. If you’re asked whether Gandhi was more than 114 years old when he died, you’ll end up with a much higher estimate of his age at death than you would if the anchoring question referred to death at 35. If you consider how much you should pay for a house, you’ll be influenced by the asking price. The same house will appear more valuable if its listing price is high than if it’s low, even if you’re determined to resist the influence of this number; and so on — the list of anchoring effects is endless. Any number that you’re asked to consider as a possible solution to an estimation problem will induce an anchoring effect.”

Availability

“Schwarz and his colleagues observed that the task of listing instances may enhance the judgments of the trait by two different routes:

  • The number of instances retrieved
  • The ease with which they come to mind

The request to list 12 instances pits the two determinants against each other. On the one hand, you’ve just retrieved an impressive number of cases in which you were assertive. On the other hand, while the first 3–4 instances of your own assertiveness probably came easily to you, you almost certainly struggled to come up with the last few to complete a set of 12; fluency was low. Which will count more — the amount retrieved or the ease and fluency of the retrieval?

The contest yielded a clear-cut winner: people who’d just listed 12 instances rated themselves as less assertive than people who’d listed only 6. Furthermore, participants who had been asked to list 12 cases in which they had not behaved assertively ended up thinking of themselves as quite assertive! If you cannot easily come up with instances of meek behavior, you’re likely to conclude that you aren’t meek at all. Self-ratings were dominated by the ease with which examples had come to mind. The experience of fluent retrieval of instances trumped the number retrieved.”

“People:

  • believe that they use their bicycles less often after recalling many rather than a few instances
  • are less confident in a choice when they’re asked to produce more arguments to support it
  • are less confident that an event was avoidable after listing more ways it could’ve been avoided
  • are less impressed by a car after listing many of its advantages

A UCLA professor found an ingenious way to exploit the availability bias. He asked different groups of students to list ways to improve the course, and he varied the required number of improvements. As expected, the students who listed more ways to improve the class rated it higher!

Perhaps the most interesting finding of this paradoxical research is that the paradox isn’t always found: people sometimes go by content rather than by ease of retrieval. The proof that you truly understand a pattern of behavior is that you know how to reverse it. Schwarz and his colleagues took on this challenge of discovering the conditions under which this reversal would take place.

The ease with which instances of assertiveness come to the subject’s mind changes during the task. The first few instances are easy, but retrieval soon becomes much harder. Of course, the subject also expects fluency to drop gradually, but the drop of fluency between 6 and 12 instances appears to be steeper than the participant expected. The results suggest that the participants make an inference: if I’m having so much more trouble than expected coming up with instances of my assertiveness, then I can’t be very assertive. Note that this inference rests on a surprise — fluency being worse than expected.”

Regression to the Mean

“If the correlation between the intelligence of spouses is less than perfect (and if men and women on average don’t differ in intelligence), then it’s a mathematical inevitability that highly intelligent women will be married to husbands who are on average less intelligent than they are (and vice versa, of course). The observed regression to the mean cannot be more interesting or more explainable than the imperfect correlation.”

“She says experience has taught her that criticism is more effective than praise. What she doesn’t understand is that it’s all due to regression to the mean.”

“Perhaps his second interview was less impressive than the first because he was afraid of disappointing us, but more likely it was his first that was unusually good.”

“Our screening procedure is good but not perfect, so we should anticipate regression. We shouldn’t be surprised that the very best candidates often fail to meet our expectations.”

“Intuitive predictions need to be corrected because they’re not regressive and are therefore biased. Suppose that I predict for each golfer in a tournament that his score on day 2 will be the same as his score on day 1. This prediction doesn’t allow for regression to the mean: the golfers who fared well on day 1 will on average do less well on day 2, and those who did poorly will mostly improve. When they’re eventually compared to actual outcomes, nonregressive predictions will be found to be biased. They’re on average overly optimistic for those who did best on the first day and overly pessimistic for those who had a bad start. The predictions are as extreme as the evidence. Similarly, if you use childhood achievement to predict grades in college without regressing your predictions toward the mean, you’ll more often than not be disappointed by the outcomes of early readers and happily surprised by the grades of those who learned to read relatively late. The corrected intuitive predictions eliminate these biases, so that predictions (both high and low) are about equally likely to overestimate and to underestimate the true value. You still make errors when your predictions are unbiased, but the errors are smaller and don’t favor either high or low outcomes.”
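
The correction Kahneman describes has a standard form: shrink the intuitive ‘matching’ prediction toward the mean in proportion to the correlation between evidence and outcome. A small sketch; the field average and the day-to-day correlation below are assumed values, chosen only to show the direction of the adjustment:

```python
def regressive_prediction(evidence, mean, r):
    """Shrink toward the mean: r = 1 reproduces the evidence unchanged;
    r = 0 ignores it and simply predicts the mean."""
    return mean + r * (evidence - mean)

field_mean = 72   # assumed average day-1 golf score
r = 0.5           # assumed day 1 / day 2 correlation

# Lower scores are better in golf: the day-1 leader is predicted to
# slip back toward the field, and the day-1 straggler to improve.
print(regressive_prediction(66, field_mean, r))  # 69.0
print(regressive_prediction(78, field_mean, r))  # 75.0
```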

“At work here is that powerful WYSIATI (what you see is all there is) rule. You can’t help dealing with the limited info you have as if it were all there is to know. You build the best possible story from the info available to you, and if it’s a good story, you believe it. Our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our ignorance.”

Changing Your Mind

“A general limitation of the human mind is its imperfect ability to reconstruct past states of knowledge, or beliefs that’ve changed. Once you adopt a new view of the world (or any part of it), you immediately lose much of your ability to recall what you used to believe before your mind changed.

Many psychologists have studied what happens when people change their minds. Choosing a topic on which minds aren’t completely made up — say, the death penalty — the experimenter carefully measures people’s attitudes. Next, the participants see or hear a persuasive pro or con message. Then the experimenter measures people’s attitudes again; they usually are closer to the persuasive message they were exposed to. Finally, the participants report the opinion they held beforehand. This task turns out to be surprisingly difficult. Asked to reconstruct their former beliefs, people retrieve their current ones instead — an instance of substitution — and many cannot believe that they ever felt differently.

Your inability to reconstruct past beliefs will inevitably cause you to underestimate the extent to which you were surprised by past events. Baruch Fischhoff first demonstrated the ‘I-knew-it-all-along’ effect, or hindsight bias, when he was a student. Together with Ruth Beyth, he conducted a survey before President Nixon visited China and Russia in 1972. The respondents assigned probabilities to 15 possible outcomes of Nixon’s diplomatic initiatives. Would Mao agree to meet with Nixon? Might the US grant diplomatic recognition to China? After decades of enmity, could the US and the Soviet Union agree on anything significant?

After Nixon’s return from his travels, Fischhoff and Beyth asked the same people to recall the probability they’d assigned to each of the 15 possible outcomes. The results were clear. If an event had actually occurred, people exaggerated the probability that they’d assigned to it earlier. If the possible event hadn’t come to pass, they erroneously recalled that they’d always considered it unlikely.”

The Illusion of Skill

“If the relative success of similar firms was determined entirely by factors that the CEO doesn’t control (call them luck, if you wish), you would find the more successful firm led by the weaker CEO 50% of the time. A correlation of .3 implies that you would find the stronger CEO leading the stronger firm in about 60% of the pairs — an improvement of a mere 10 percentage points over random guessing, hardly grist for the hero worship of CEOs we so often witness.

If you expected this value to be higher — and most of us do — then you should take that as an indication that you’re prone to overestimate the predictability of the world you live in. Make no mistake: improving the odds of success from 1:1 to 3:2 is a very significant advantage. From the perspective of most business writers, however, a CEO who has so little control over performance wouldn’t be particularly impressive even if her firm did well. It’s difficult to imagine people lining up to buy a book that enthusiastically describes the practices of business leaders who, on average, do somewhat better than chance. Consumers have a hunger for a clear message about the determinants of success and failure in business, and they need stories that offer a sense of understanding, however illusory.”
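
The 60% figure follows from the correlation alone, and you can check it by simulation. Modeling CEO quality and firm success as jointly normal variables is an assumption made here for the check, not something the excerpt specifies:

```python
import numpy as np

rng = np.random.default_rng(0)
r = 0.3
cov = [[1.0, r], [r, 1.0]]

# Each row: (CEO quality, firm success) for one firm; a and b form random pairs.
a = rng.multivariate_normal([0, 0], cov, size=500_000)
b = rng.multivariate_normal([0, 0], cov, size=500_000)

# How often does the firm with the stronger CEO also do better?
agree = ((a[:, 0] > b[:, 0]) == (a[:, 1] > b[:, 1])).mean()
print(agree)  # ~0.597; the closed form is 0.5 + arcsin(r)/pi
```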

“Because of the halo effect, we get the causal relationship backward: we’re prone to believe that the firm fails because its CEO is rigid, when the truth is that the CEO appears to be rigid because the firm is failing.”

“Mutual funds are run by highly experienced and hardworking professionals who buy and sell stocks to achieve the best possible results for their clients. Nevertheless, the evidence from 50+ years of research is conclusive: for a large majority of fund managers, the selection of stocks is more like rolling dice than playing poker. Typically at least 2 out of every 3 mutual funds underperform the overall market in any given year.

More important, the year-to-year correlation between the outcomes of mutual funds is very small, barely higher than 0. The successful funds in any given year are mostly lucky; they have a good roll of the dice. There’s general agreement among researchers that nearly all stock pickers, whether they know it or not — and few of them do — are playing a game of chance. The subjective experience of traders is that they’re making sensible educated guesses in a situation of great uncertainty. In highly efficient markets, however, educated guesses are no better than blind guesses.”

“No one in the firm seemed to be aware of the nature of the game that its stock pickers were playing. The advisers themselves felt they were competent professionals doing a serious job, and their superiors agreed. We asked execs to guess the year-to-year correlation in the rankings of individual advisors. They thought they knew what was coming and smiled as they said ‘not very high’ or ‘performance certainly fluctuates.’ It quickly became clear, however, that no one expected the average correlation to be 0.

Our message to the execs was that, at least when it came to building portfolios, the firm was rewarding luck as if it were skill. This should’ve been shocking news to them, but it wasn’t. There was no sign that they disbelieved us. How could they? After all, we’d analyzed their own results, and they were sophisticated enough to see the implications. We all went on calmly with our dinner, and I have no doubt that both our findings and their implications were quickly swept under the rug and that life in the firm went on just as before. The illusion of skill isn’t only an individual aberration; it’s deeply ingrained in the culture of the industry. Facts that challenge such basic assumptions — and thereby threaten people’s livelihood and self-esteem — are simply not absorbed. The mind doesn’t digest them.”

“Tetlock interviewed 284 people who made their living ‘commenting or offering advice on political and economic trends.’ He asked them to assess the probabilities that certain events would occur in the not too distant future, both in areas of the world in which they'd specialized and in regions about which they had less knowledge. Would Gorbachev be ousted in a coup? Would the US go to war in the Persian Gulf? Which country would become the next big emerging market? In all, he gathered more than 80,000 predictions. He also asked the experts how they’d reached their conclusions, how they reacted when proved wrong, and how they evaluated evidence that didn’t support their positions. Respondents were asked to rate the probabilities of 3 alternative outcomes in every case: the persistence of the status quo, more of something such as political freedom or economic growth, or less of that thing.

The results were devastating. The experts performed worse than they would’ve if they had simply assigned equal probabilities to each of the 3 potential outcomes. In other words, people who spend their time, and earn their living, studying a particular topic produce poorer predictions than dart-throwing monkeys who would’ve distributed their choices evenly over the options. Even in the region they knew best, experts were not significantly better than the nonspecialists.

Those who know more forecast very slightly better than those who know less. But those with the most knowledge are often less reliable. The reason is that the person who acquires more knowledge develops an enhanced illusion of her skill and becomes unrealistically confident.”

Formulas vs. Humans

“We know from studies of priming that unnoticed stimuli in our environment have a substantial influence on our thoughts and actions. These influences fluctuate from moment to moment. The brief pleasure of a cool breeze on a hot day may make you slightly more positive and optimistic about whatever you’re evaluating at the time. The prospects of a convict being granted parole may change significantly during the time that elapses between successive food breaks in the parole judges’ schedule. Because you have very little direct knowledge of what goes on in your mind, you will never know that you might’ve made a different judgment or reached a different decision under very slightly different circumstances. Formulas don’t suffer from such problems. Given the same input, they always return the same answer. When predictability is poor — which it is in most of the studies reviewed by Meehl and his followers — inconsistency is destructive of any predictive validity.

The research suggests a surprising conclusion: to maximize predictive accuracy, final decisions should be left to formulas, especially in low-validity environments. In med school admissions decisions, the final determination is often made by the faculty members who do interviews. The evidence is fragmentary, but there are solid grounds for a conjecture: conducting an interview is likely to diminish the accuracy of a selection procedure, if the interviewers also make the final admissions decisions. Because interviewers are overconfident in their intuitions, they’ll assign too much weight to their personal impressions and too little weight to other sources of info, lowering validity. Similarly, the experts who evaluate the quality of immature wine to predict its future have a source of info that almost certainly makes things worse rather than better: they can taste the wine. In addition, of course, even if they have a good understanding of the effects of the weather on wine quality, they won’t be able to maintain the consistency of a formula.”

“Formulas that assign equal weights to all the predictors are often superior, because they aren’t affected by accidents of sampling.

The surprising success of equal-weighting schemes has an important practical implication: it’s possible to develop useful algorithms without any prior statistical research. Simple equally weighted formulas based on existing stats or on common sense are often very good predictors of significant outcomes. In a memorable example, Dawes showed that marital stability is well predicted by a formula:

Frequency of lovemaking minus frequency of quarrels

You don’t want your result to be a negative number.

The important conclusion from this research is that an algorithm that’s constructed on the back of an envelope is often good enough to compete with an optimally weighted formula, and certainly good enough to outdo expert judgment. This logic can be applied in many domains, ranging from the selection of stocks by portfolio managers to the choices of medical treatment by doctors or patients.”
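
In code, a back-of-the-envelope model of this kind is just an equal-weight sum of standardized predictors, with only the signs chosen by common sense; nothing is fitted, so there are no sampling accidents to overfit. The data below are invented for illustration:

```python
import statistics

def zscores(values):
    """Standardize a predictor so it can be added to others on equal footing."""
    mean, sd = statistics.mean(values), statistics.pstdev(values)
    return [(v - mean) / sd for v in values]

# Invented monthly counts for 5 couples.
lovemaking = [12, 7, 9, 15, 4]
quarrels = [2, 8, 5, 1, 9]

# Equal weights; only the signs reflect common sense (more lovemaking is
# good, more quarrels bad). Dawes's envelope version skips even the z-scores
# and uses the raw difference of frequencies.
stability = [a - q for a, q in zip(zscores(lovemaking), zscores(quarrels))]
print(stability)  # higher scores predict more stable marriages
```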

“Intuition adds value even in the justly derided selection interview, but only after a disciplined collection of objective info and disciplined scoring of separate traits.”

“Suppose that you need to hire a sales rep. If you’re serious about hiring the best possible person for the job, first select a few traits that are prereqs for success in this position (technical proficiency, engaging personality, reliability, and so on). Don’t overdo it — 6 dimensions is a good number. The traits you choose should be as independent as possible from each other, and you should feel that you can assess them reliably by asking a few factual questions. Next, make a list of those questions for each trait and think about how you’ll score it, say on a 1–5 scale. You should have an idea of what you’ll call ‘very weak’ or ‘very strong.’

These preparations should take you half an hour or so, a small investment that can make a significant difference in the quality of the people you hire. To avoid halo effects, you must collect the info on 1 trait at a time, scoring each before you move on to the next one. Don’t skip around. To evaluate each candidate, add up the 6 scores.”
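
As a sketch, the whole procedure fits in a dozen lines; the 3 trait names beyond those the passage mentions are invented placeholders:

```python
# Structured-interview scorecard: 6 traits, each scored 1-5 from factual
# questions, one trait at a time across candidates to limit halo effects.
TRAITS = ["technical proficiency", "engaging personality", "reliability",
          "communication", "judgment", "diligence"]  # last 3 invented

def total_score(ratings):
    assert set(ratings) == set(TRAITS), "every trait must be scored"
    assert all(1 <= r <= 5 for r in ratings.values()), "scores run 1-5"
    return sum(ratings.values())  # the final rule: just add up the 6 scores

candidate = {"technical proficiency": 4, "engaging personality": 3,
             "reliability": 5, "communication": 4, "judgment": 2,
             "diligence": 4}
print(total_score(candidate))  # 22 of a possible 30
```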

“Acquiring a skill requires 2 basic conditions:

  • An environment that’s sufficiently regular to be predictable
  • An opportunity to learn these regularities through prolonged practice

When both these conditions are satisfied, intuitions are likely to be skilled. Chess is an extreme example of a regular environment. Physicians, nurses, athletes, and firefighters also face complex but fundamentally orderly situations. The accurate intuitions are due to highly valid cues that the expert’s System 1 has learned to use, even if System 2 hasn’t learned to name them. In contrast, stock pickers and political scientists who make long-term forecasts operate in a zero-validity environment. Their failures reflect the basic unpredictability of the events that they try to forecast.”

“A 2005 study examined rail projects undertaken worldwide between 1969 and 1998. In 90%+ of the cases, the number of passengers projected to use the system was overestimated. Even though these passenger shortfalls were widely publicized, forecasts didn’t improve over those 30 years; on average, planners overestimated how many people would use new rail projects by 106%, and the average cost overrun was 45%. As more evidence accumulated, the experts didn’t become more reliant on it.

In 2002, a survey of American homeowners who’d remodeled their kitchens found that, on average, they’d expected the job to cost $19k; in fact, they ended up paying an average of $39k.”

Optimism

“Most of us view the world as more benign than it really is, our own attributes as more favorable than they truly are, and the goals we adopt as more achievable than they are likely to be. We also tend to exaggerate our ability to forecast the future, which fosters optimistic overconfidence. In terms of its consequences for decisions, the optimistic bias may well be the most significant of the cognitive biases.”

“If you were allowed 1 wish for your child, seriously consider wishing them optimism. Optimists are normally cheerful and happy, and therefore popular; they’re resilient in adapting to failures and hardships, their chances of clinical depression are reduced, their immune system is stronger, they take better care of their health, they feel healthier than others and are in fact likely to live longer. A study of people who exaggerate their expected life span beyond actuarial predictions showed that they work longer hours, are more optimistic about their future income, are more likely to remarry after divorce (the classic ‘triumph of hope over experience’), and are more prone to bet on individual stocks. Of course, the blessings of optimism are offered only to individuals who are only mildly biased and who are able to ‘accentuate the positive’ without losing track of reality.

Optimistic individuals play a disproportionate role in shaping our lives. Their decisions make a difference; they are the inventors, the entrepreneurs, the political and military leaders — not average people. They got to where they are by seeking challenges and taking risks. They’re talented and they have been lucky, almost certainly luckier than they acknowledge. They are probably optimistic by temperament; a survey of small business founders concluded that entrepreneurs are more sanguine than midlevel managers about life in general. Their experiences of success have confirmed their faith in their judgment and their ability to control events. Their self-confidence is reinforced by the admiration of others. This reasoning leads to a hypothesis: the people who have the greatest influence on the lives of others are likely to be optimistic and overconfident, and to take more risks than they realize.”

“The evidence suggests that an optimistic bias plays a role — sometimes the dominant role — whenever individuals or institutions voluntarily take on significant risks. More often than not, risk takers underestimate the odds they face, and don’t invest sufficient effort to find out what the odds are. Because they misread the risks, optimistic entrepreneurs often believe they’re prudent, even when they’re not. Their confidence in their future success sustains a positive mood that helps them obtain resources from others, raise the morale of their employees, and enhance their prospects of prevailing. When action is needed, optimism, even of the mildly delusional variety, may be a good thing.”

“The chances that a small business will survive for 5 years in the US are about 35%. But the individuals who open such businesses don’t believe the stats apply to them. A survey found that American entrepreneurs tend to believe they’re in a promising line of business: their average estimate of the chances of success for ‘any business like yours’ was 60% — almost double the true value. The bias was more glaring when people assessed the odds of their own venture. Fully 81% of the entrepreneurs put their personal odds of success at 7 out of 10 or higher, and 33% said their chance of failing was zero.

The direction of the bias isn’t surprising. If you interviewed someone who recently opened an Italian restaurant, you wouldn’t expect her to have underestimated her prospects for success or to have a poor view of her ability as a restaurateur. But you must wonder: Would she still have invested money and time if she’d made a reasonable effort to learn the odds — or, if she did learn the odds (60% of new restaurants are out of business after 3 years), paid attention to them? The idea of adopting the outside view probably didn’t occur to her.

One of the benefits of an optimistic temperament is that it encourages persistence in the face of obstacles.”

“Discouraging news led about half of the inventors to quit after receiving a grade that unequivocally predicted failure. However, 47% of them continued development efforts even after being told that their project was hopeless, and on average these persistent (or obstinate) individuals doubled their initial losses before giving up. Significantly, persistence after discouraging advice was relatively common among inventors who had a high score on a personality measure of optimism — on which inventors generally scored higher than the general population. Overall, the return on private invention was small, ‘lower than the return on private equity and on high-risk securities.’ More generally, the financial benefits of self-employment are mediocre: given the same qualifications, people achieve higher average returns by selling their skills to employers than by setting out on their own. The evidence suggests that optimism is widespread, stubborn, and costly.

Psychologists have confirmed that most people genuinely believe that they’re superior to most others on most desirable traits — they’re willing to bet small amounts of money on these beliefs in the lab. In the market, of course, beliefs in one’s superiority have significant consequences. Leaders of large businesses sometimes make huge bets in expensive mergers and acquisitions, acting on the mistaken belief that they can manage the assets of another company better than its current owners do. The stock market commonly responds by downgrading the value of the acquiring firm, because experience has shown that efforts to integrate large firms fail more often than they succeed. The misguided acquisitions have been explained by a ‘hubris hypothesis’: the execs of the acquiring firm are simply less competent than they think they are.

Economists Ulrike Malmendier and Geoffrey Tate identified optimistic CEOs by the amount of company stock that they owned personally and observed that highly optimistic leaders took excessive risks. They assumed debt rather than issuing equity and were more likely than others to ‘overpay for target companies and undertake value-destroying mergers.’ Remarkably, the stock of the acquiring company suffered substantially more in mergers if the CEO was overly optimistic by the authors’ measure. The stock market is apparently able to identify overconfident CEOs. This observation exonerates the CEOs from one accusation even as it convicts them of another: the leaders of enterprises who make unsound bets don’t do so because they’re betting with other people’s money. On the contrary, they take greater risks when they personally have more at stake. The damage caused by overconfident CEOs is compounded when the business press anoints them as celebrities; the evidence indicates that prestigious press awards to the CEO are costly to stockholders. The authors write, ‘We find that firms with award-winning CEOs subsequently underperform, in terms both of stock and of operating performance. At the same time, CEO compensation increases, CEOs spend more time on activities outside the company such as writing books and sitting on outside boards, and they’re more likely to engage in earnings management.’”

Experts who acknowledge the full extent of their ignorance may expect to be replaced by more confident competitors, who are better able to gain the trust of clients. An unbiased appreciation of uncertainty is a cornerstone of rationality — but it’s not what people and orgs want. Extreme uncertainty is paralyzing under dangerous circumstances, and the admission that one is merely guessing is especially unacceptable when the stakes are high. Acting on pretended knowledge is often the preferred solution.

When they come together, the emotional, cognitive, and social factors that support exaggerated optimism are a heady brew, which sometimes leads people to take risks that they’d avoid if they knew the odds.”

Loss Aversion

“Experimental economist John List, who’s studied trading at baseball card conventions, found that novice traders were reluctant to part with the cards they owned, but that this reluctance eventually disappeared with trading experience. More surprisingly, List found a large effect of trading experience on the endowment effect for new goods.

At a convention, List displayed a notice that invited people to take part in a short survey, for which they’d be compensated with a small gift: a mug or a chocolate bar of equal value. As the volunteers were about to leave, List said to each, ‘We gave you a mug [or chocolate bar], but you can trade for a chocolate bar [or mug] instead, if you wish.’ List found that only 18% of the inexperienced traders were willing to exchange their gift for the other. In sharp contrast, experienced traders showed no trace of an endowment effect: 48% of them traded! At least in a market environment in which trading was the norm, they showed no reluctance to trade.

Jack Knetsch also conducted experiments in which subtle manipulations made the endowment effect disappear. Participants displayed an endowment effect only if they had physical possession of the good for a while before the possibility of trading it was mentioned.”

“If you’re set to look for it, the asymmetric intensity of the motives to avoid losses and to achieve gains shows up almost everywhere. It’s an ever-present feature of negotiations, especially of renegotiations of an existing contract, the typical situation in labor negotiations and in international discussions of trade or arms limitations. The existing terms define reference points, and a proposed change in any aspect of the agreement is inevitably viewed as a concession that one side makes to the other. Loss aversion creates an asymmetry that makes agreements difficult to reach. The concessions you make to me are my gains, but they’re your losses; they cause you much more pain than they give me pleasure. Inevitably, you’ll place a higher value on them than I do. The same is true, of course, of the very painful concessions you demand from me, which you don’t appear to value sufficiently! Negotiations over a shrinking pie are especially difficult, because they require an allocation of losses. People tend to be much more easygoing when they bargain over an expanding pie.”
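Kahneman and Tversky formalized this asymmetry in prospect theory’s value function. Here is a minimal sketch in Python; the parameters (alpha ≈ 0.88, loss-aversion coefficient lambda ≈ 2.25) are the median estimates from their 1992 paper, not figures quoted in the passages above:

```python
# A minimal sketch of prospect theory's value function, which formalizes
# the asymmetry described above. Parameters are the median estimates from
# Tversky & Kahneman (1992), not figures quoted in this excerpt.

def value(x, alpha=0.88, loss_aversion=2.25):
    """Subjective value of a gain or loss x, relative to a reference point."""
    if x >= 0:
        return x ** alpha                      # gains: concave curve
    return -loss_aversion * (-x) ** alpha      # losses: steeper, roughly 2.25x

# A $100 concession feels much worse to the side giving it up than
# the same $100 feels good to the side receiving it:
print(value(100))    # ~57.5  (pleasure of the gain)
print(value(-100))   # ~-129.5 (pain of the loss)
```

This is why, in the negotiation example, each side overvalues its own concessions: both parties are pricing them on the steep, losses side of the curve.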

“As initially conceived, plans for reform almost always produce many winners and some losers while achieving an overall improvement. If the affected parties have any political influence, however, potential losers will be more active and determined than potential winners; the outcome will be biased in their favor and inevitably more expensive and less effective than initially planned. Reforms commonly include grandfather clauses that protect current stakeholders — for example, when the existing workforce is reduced by attrition rather than by dismissals, or when cuts in salaries and benefits apply only to future workers. Loss aversion is a powerful conservative force that favors minimal changes from the status quo in the lives of both institutions and individuals. This conservatism helps keep us stable.”

Denominator Neglect

“As predicted by denominator neglect, low-probability events are much more heavily weighted when described in terms of relative frequencies (how many) than when stated in more abstract terms of ‘chances,’ ‘risk,’ or ‘probability’ (how likely). As we’ve seen, System 1 is much better at dealing with individuals than with categories.

The effect of the frequency format is large. In 1 study, people who saw info about ‘a disease that kills 1,286 people out of every 10,000’ judged it as more dangerous than people who were told about ‘a disease that kills 24.1% of the population.’”

“The power of format creates opportunities for manipulation, which people with an axe to grind know how to exploit. Slovic and his colleagues cite an article that states that ‘approximately 1,000 homicides a year are committed nationwide by seriously mentally ill individuals who aren’t taking their meds.’ Another way of expressing the same fact is that ‘1,000 out of every 273,000,000 Americans will die in this manner each year.’ Another is that ‘the annual likelihood of being killed by such an individual is approximately 0.0004%.’ Still another: ‘1,000 Americans will die in this manner each year, or less than 1/30th the number who will die of suicide and about 1/4 the number who will die of laryngeal cancer.’ Slovic points out that ‘these advocates are quite open about their motivation: they want to frighten the general public about violence by people with mental disorder, in the hope that this fear will translate into increased funding for mental health services.’

A good attorney who wishes to cast doubt on DNA evidence won’t tell the jury that ‘the chance of a false match is 0.1%.’ The statement that ‘a false match occurs in 1 of 1,000 capital cases’ is far more likely to pass the threshold of reasonable doubt.”
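The manipulation here is purely presentational: every version encodes the same rate. A quick sketch of the arithmetic behind these reframings, using only the figures from the quoted passages:

```python
# The same risks, re-expressed in the formats discussed above.
# All figures come from the quoted passages; only the framing differs.

homicides = 1_000
population = 273_000_000          # the population figure used in the quote

rate = homicides / population
print(f"{homicides:,} out of {population:,} Americans")   # frequency format
print(f"annual likelihood: {rate:.4%}")                   # ~0.0004%

# The disease example works the same way: 1,286 out of 10,000 is 12.86%,
# roughly half as lethal as 24.1%, yet it was judged MORE dangerous
# because the frequency format conjures images of individual victims.
print(f"{1_286 / 10_000:.2%}")                            # 12.86%
```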

“The possibility of a rare event will (often, not always) be overestimated, because of the confirmatory bias of memory. Thinking about that event, you try to make it true in your mind. A rare event will be overweighted if it specifically attracts attention. Separate attention is effectively guaranteed when prospects are described explicitly (‘99% chance to win $1,000, and 1% chance to win nothing’). Obsessive concerns (the bus in Jerusalem), vivid images (the roses), concrete representations (1 of 1,000), and explicit numbers (as in choice from description), all contribute to overweighting. And when there’s no overweighting, there will be neglect. When it comes to rare probabilities, our mind isn’t designed to get things quite right. For the residents of a planet that may be exposed to events no one has yet experienced, this isn’t good news.”
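The overweighting of rare events has a standard formalization in prospect theory’s probability weighting function. The functional form and the gamma ≈ 0.61 estimate below come from Tversky and Kahneman’s 1992 paper rather than from the passage above; a minimal sketch:

```python
# A sketch of the probability weighting function from prospect theory,
# which captures the overweighting of rare events described above.
# Functional form and gamma = 0.61 (gains) are from Tversky & Kahneman
# (1992); they aren't quoted in this excerpt.

def decision_weight(p, gamma=0.61):
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# A 1% chance carries the decision weight of roughly a 5.5% chance...
print(round(decision_weight(0.01), 3))   # ~0.055
# ...while near-certainty is underweighted:
print(round(decision_weight(0.99), 3))   # ~0.912
```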

Framing

People expect to have stronger emotional reactions (including regret) to an outcome that’s produced by action than to the same outcome when it’s produced by inaction. This has been verified in the context of gambling: people expect to be happier if they gamble and win than if they refrain from gambling and get the same amount. The asymmetry is at least as strong for losers, and it applies to blame as well as to regret. The key isn’t the difference between commission and omission but the distinction between default options and actions that deviate from the default. When you deviate from the default, you can easily imagine the norm — and if the default is associated with bad consequences, the discrepancy between the two can be a source of painful emotions. The default option when you own a stock is not to sell it, but the default option when you meet your colleague in the morning is to greet him. Selling a stock and failing to greet your coworker are both departures from the default option and natural candidates for regret or blame.”

The physician who prescribes the unusual treatment faces a substantial risk of regret, blame, and perhaps litigation. In hindsight, it will be easier to imagine the normal choice; the abnormal choice will be easy to undo. True, a good outcome will contribute to the reputation of the physician who dared, but the potential benefit is smaller than the potential cost because success is generally a more normal outcome than failure.”

“The intense aversion to trading increased risk for some other advantage plays out on a grand scale in the laws and regulations governing risk. This trend is especially strong in Europe, where the precautionary principle, which prohibits any action that might cause harm, is a widely accepted doctrine. In the regulatory context, the precautionary principle imposes the entire burden of proving safety on anyone who undertakes actions that might harm people or the environment. Multiple international bodies have specified that the absence of scientific evidence of potential damage isn’t sufficient justification for taking risks. As jurist Cass Sunstein points out, the precautionary principle is costly, and when interpreted strictly it can be paralyzing. He mentions an impressive list of innovations that wouldn’t have passed the test, including ‘airplanes, AC, antibiotics, cars, chlorine, the measles vaccine, open-heart surgery, radio, refrigeration, smallpox vaccine, and X-rays.’ The strong version of the precautionary principle is obviously untenable. But enhanced loss aversion is embedded in a strong and widely shared moral intuition; it originates in System 1.”

Losses evoke stronger negative feelings than costs. Choices are not reality-bound because System 1 is not reality-bound.

Richard Thaler described the debate about whether gas stations would be allowed to charge different prices for purchases paid with cash or on credit. The credit-card lobby pushed hard to make differential pricing illegal, but it had a fallback position: the difference, if allowed, would be labeled a cash discount, not a credit surcharge. Their psychology was sound: people will more readily forgo a discount than pay a surcharge. The two may be economically equivalent, but they’re not emotionally equivalent.”

“The framing study yielded 3 main findings:

  • A region that is commonly associated with emotional arousal (the amygdala) was most likely to be active when subjects’ choices conformed to the frame. This is just as we’d expect if the emotionally loaded words KEEP and LOSE produce an immediate tendency to approach the sure thing (when it’s framed as a gain) or avoid it (when it’s framed as a loss). The amygdala is accessed very rapidly by emotional stimuli — and it’s a likely suspect for involvement in System 1.
  • A brain region known to be associated with conflict and self-control was more active when subjects didn’t do what comes naturally — when they chose the sure thing in spite of it being labeled LOSE. Resisting the inclination of System 1 apparently involves conflict.
  • The most ‘rational’ subjects — those who were the least susceptible to framing effects — showed enhanced activity in a frontal area of the brain that’s implicated in combining emotion and reasoning to guide decisions. Remarkably, the ‘rational’ individuals weren’t those who showed the strongest neural evidence of conflict. It appears that these elite participants were (often, not always) reality-bound with little conflict.”

“Not all frames are equal, and some frames are clearly better than alternative ways to describe (or to think about) the same thing. Consider the following pair of problems:

A woman has bought two $80 tickets to the theater. When she arrives, she discovers the tickets are missing from her wallet. Will she buy 2 more tickets to see the play?

A woman goes to the theater, intending to buy 2 tickets that cost $80 each. When she arrives, she discovers that the $160 with which she was going to make the purchase is missing from her wallet. She could use her credit card. Will she buy the tickets?

Respondents who see only one version of this problem reach different conclusions, depending on the frame. Most believe that the woman in the first story will go home without seeing the show if she has lost the tickets, and most believe that she’ll charge tickets for the show if she’s lost money.

The explanation should already be familiar — this problem involves mental accounting and the sunk-cost fallacy. The different frames evoke different mental accounts, and the significance of the loss depends on the account to which it’s posted. When tickets to a particular show are lost, it’s natural to post them to the account associated with that play. The cost appears to have doubled and may now be more than the experience is worth. In contrast, a loss of cash is charged to a ‘general revenue’ account — the theater patron is slightly poorer than she’d thought she was, and the question she’s likely to ask herself is whether the small reduction in her disposable wealth will change her decision about paying for the tickets. Most respondents thought it wouldn’t.

The version in which cash was lost leads to more reasonable decisions. It’s a better frame because the loss, even if tickets were lost, is ‘sunk,’ and sunk costs should be ignored. History is irrelevant and the only issue that matters is the set of options the theater patron has now, and their likely consequences. Whatever she lost, the relevant fact is that she’s less wealthy than she was before she opened her wallet. If the person who lost tickets were to ask for my advice, I’d say: ‘Would you’ve bought the tickets if you’d lost the equivalent amount of cash? If yes, go ahead and buy new ones.’ Broader frames and inclusive accounts generally lead to more rational decisions.”

“Most car buyers list gas mileage as one of the factors that determine their choice; they know that high-mileage cars have lower operating costs. But the frame that has traditionally been used in the US — miles per gallon — provides very poor guidance to the decisions of both individuals and policy makers. Consider 2 car owners who seek to reduce their costs:

Adam switches from a gas-guzzler of 12 mpg to a slightly less voracious guzzler that runs at 14 mpg.

Environmentally virtuous Beth switches from a 30 mpg car to one that runs at 40 mpg.

Suppose both drivers travel equal distances over a year. Who will save more gas by switching? You almost certainly share the widespread intuition that Beth’s action is more significant than Adam’s: she increased mpg by 10 rather than 2, and by a third (from 30 to 40) rather than a sixth (from 12 to 14). Now engage your System 2 and work it out. If the 2 car owners both drive 10,000 miles, Adam will reduce his consumption from a scandalous 833 gallons to a still shocking 714 gallons, for a saving of 119 gallons. Beth’s use of fuel will drop from 333 gallons to 250, a saving of only 83 gallons. The mpg frame is wrong, and it should be replaced by the gallons-per-mile frame (or liters-per-100km, which is used in most other countries). The misleading intuitions fostered by the mpg frame are likely to mislead policy makers as well as car buyers.
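The arithmetic is worth verifying for yourself. A quick sketch using the numbers from the passage:

```python
# Verifying the arithmetic from the passage: gallons consumed over
# 10,000 miles, before and after each driver's switch.

miles = 10_000

def gallons(mpg):
    return miles / mpg

adam_saved = gallons(12) - gallons(14)   # 833.3 - 714.3 = ~119 gallons
beth_saved = gallons(30) - gallons(40)   # 333.3 - 250.0 = ~83 gallons

print(f"Adam saves {adam_saved:.0f} gallons")   # 119
print(f"Beth saves {beth_saved:.0f} gallons")   # 83
# The gallons-per-mile frame makes this transparent; the mpg frame,
# because consumption is 1/mpg, compresses differences at the guzzler end.
```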

Under Obama, Cass Sunstein served as administrator of the Office of Info and Regulatory Affairs. With Thaler, he coauthored Nudge, which is the basic manual for applying behavioral economics to policy. It was no accident that the ‘fuel economy and environment’ sticker that will be displayed on every new car starting in 2013 will for the first time in the US use the gallons-per-mile info. Unfortunately, the correct formulation will be in small print, along with the more familiar mpg info in large print, but the move is in the right direction. The 5-year interval between the publication of ‘The MPG Illusion’ and the implementation of a partial correction is probably a speed record for a significant application of psychological science to public policy.”
