Thinking, Fast and Slow
{{#invoke:random|list
| sep=newline | limit=1|
"We can be blind to the obvious, and we are also blind to our blindness."
— Daniel Kahneman, *Thinking, Fast and Slow*
"Nothing in life is as important as you think it is when you are thinking about it."
— Daniel Kahneman, *Thinking, Fast and Slow*
"A reliable way to make people believe in falsehoods is frequent repetition, because familiarity is not easily distinguished from truth."
— Daniel Kahneman, *Thinking, Fast and Slow*
"The idea that the future is unpredictable is undermined every day by the ease with which the past is explained."
— Daniel Kahneman, *Thinking, Fast and Slow*
"When directly compared or weighted against each other, losses loom larger than gains."
— Daniel Kahneman, *Thinking, Fast and Slow*
"The confidence people have in their beliefs is not a measure of the quality of evidence but of the coherence of the story the mind has managed to construct."
— Daniel Kahneman, *Thinking, Fast and Slow*
"This is the essence of intuitive heuristics: when faced with a difficult question, we often answer an easier one instead, usually without noticing the substitution."
— Daniel Kahneman, *Thinking, Fast and Slow*
"Our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our ignorance."
— Daniel Kahneman, *Thinking, Fast and Slow*
"We are prone to overestimate how much we understand about the world and to underestimate the role of chance in events."
— Daniel Kahneman, *Thinking, Fast and Slow*
"The experiencing self does not have a voice. The remembering self is sometimes wrong, but it is the one that keeps score and governs what we learn from living, and it is the one that makes decisions."
— Daniel Kahneman, *Thinking, Fast and Slow*
}}
Introduction
📘 Thinking, Fast and Slow (2011) is Daniel Kahneman’s plain-spoken guide to how two modes of thought—System 1 (fast, intuitive) and System 2 (slow, deliberative)—shape judgment, choice and well-being. [1] Across five parts and thirty-eight chapters, it synthesizes decades of findings on heuristics and biases, overconfidence, prospect theory and the “two selves,” explaining patterns such as anchoring, availability, regression to the mean, framing and the endowment effect. [2] Its narrative moves from memorable experiments to applications in economics and policy, encouraging readers to spot predictable errors and use ideas like the “outside view” and risk policies to decide better. [1] Reviewers praised its clarity and ambition; *The New Yorker* called it a humane inquiry into the “systematic errors in the thinking of normal people.” [3] The book also reached a wide audience: Macmillan reports more than 2.6 million copies sold, and the Library of Congress notes it landed on the *New York Times* bestseller list and was named one of 2011’s best books by *The Economist*, *The Wall Street Journal* and *The New York Times Book Review*. [4][5]
Chapter summary
This outline follows the Farrar, Straus and Giroux hardcover edition (25 October 2011; ISBN 978-0-374-27563-1).[1]
I – Two Systems
👥 1 – The Characters of the Story. A face on a screen looks furious at a glance while the multiplication 17×24 forces concentration, a contrast that frames the two “characters” of thought. System 1 runs automatically and effortlessly, generating impressions, intentions, and quick associations from scant cues. System 2 allocates attention to demanding tasks, checks impulse, and can take control when needed, but it tires easily. Automatic operations—reading simple words, orienting to a sharp sound, finishing “bread and …”—are the province of System 1. Effortful operations—holding a string of digits, searching memory for a rule, or comparing investment options—draw on System 2’s scarce capacity. Visual illusions with arrow‑tipped lines show how perception delivers a compelling but false impression that even explicit knowledge cannot erase. When System 2 is busy or relaxed, it accepts the suggestions of System 1 and rationalizes them into a coherent story. Together they form a division of labor that mostly works well but also leaves people prone to predictable errors. The central theme is that the fast system’s strengths—speed, pattern completion, and association—become liabilities in uncertainty unless the slow system engages to question the first draft of experience.
🎯 2 – Attention and Effort. The chapter anchors attention with J. Ridley Stroop’s 1930s color‑word conflict, in which naming the ink color of the word “BLUE” printed in red slows responses and produces errors. The interference arises from an automatic act—reading—that effortful control must overcome, and the cost can be watched in real time. Pupil‑tracking experiments show dilation as difficulty rises, then a plateau when the mind nears capacity. When people hold numbers in memory, their pupils stay enlarged and they become more prone to slips, impatience, and missed cues. Christopher Chabris and Daniel Simons’ 1999 “gorilla” video captures the price of focused effort: while counting basketball passes, many viewers fail to notice a person in a gorilla suit walking through the scene. The failure reflects selective attention directed by a goal that screens out the unexpected. Attention is a limited resource commandeered by System 2, so managing one demanding task sharply reduces capacity for others. Because effort is aversive, people naturally economize it, which is why distractions, multitasking, and heavy cognitive load lead to lapses that feel surprising after the fact. The lesson for the book’s larger argument is that a small, effortful controller can be overwhelmed by the ongoing stream of automatic operations, shaping what is seen, remembered, and decided.
🦥 3 – The Lazy Controller. Evidence for a “lazy controller” comes from Roy Baumeister’s late‑1990s Case Western Reserve studies in which hungry volunteers sat with warm cookies and candy but were told to eat only radishes before attempting an impossible puzzle. Those who had resisted the sweets abandoned the puzzle sooner than those allowed to indulge, suggesting that self‑control consumed resources needed for persistence. Similar patterns appear after people inhibit emotion, keep a rigid posture, or monitor their speech—they later take mental shortcuts and avoid difficult tasks. When System 2 is depleted or occupied, it is less willing to interrogate the impulses and stories offered by System 1. In this state people pick the default option, accept the first plausible interpretation, and fail to check for errors they would otherwise catch. The point is not that control is weak but that it behaves like a fatigable muscle that needs rest or renewed motivation. Because the mind prefers to save effort, analytic thinking becomes sporadic and conditional on available energy. The chapter connects that frugality to recurring biases: when the controller is tired, the fast system’s effortless answers go unchallenged and shape judgment.
🧩 4 – The Associative Machine. The mind’s associativity appears in priming: after seeing or hearing “EAT,” people are more likely to complete the fragment “SO_P” as “SOUP,” whereas “WASH” nudges “SOAP.” John Bargh and colleagues at New York University in the mid‑1990s reported that volunteers exposed to scrambled sentences containing words linked to old age then walked more slowly down a corridor, as if the idea of “elderly” had prepared a matching action tendency. In other studies, reminders of money made people more self‑sufficient and less helpful, and exposure to hostile words shaped later interpretations of ambiguous behavior. These effects arise without awareness, travel rapidly along networks of related ideas, and color perception, memory, and motor readiness in a single sweep. Because the network favors coherence, it stitches fragments into a simple story that feels obvious and complete. That rapid storymaking streamlines ordinary life but also seeds biases such as the halo effect and stereotype‑consistent judgments. In the book’s framework, System 1 operates as an associative machine that predicts the next moment from whatever is at hand. Unless System 2 actively questions that first draft, subtle cues can redirect both what is seen and what is done before reasoning begins.
😌 5 – Cognitive Ease. Cognitive ease is the sensation of fluency created by repetition, clarity, and familiarity, and it can be observed in simple laboratory tasks. In “illusion‑of‑truth” experiments, statements heard before—even when flagged as dubious—are rated as more likely to be true on later presentation. At Princeton in 2006, Adam Alter and Daniel Oppenheimer reported that stocks with more pronounceable ticker symbols enjoyed higher early returns, consistent with investors rewarding fluency. The same logic shows up in typography: a high‑contrast, clean font makes instructions feel simpler and more acceptable, while a faint or hard‑to‑read font slows people down and invites scrutiny. Mere exposure shifts liking; a name, logo, or slogan encountered repeatedly acquires a warm, effortless feel that is easily mistaken for accuracy or safety. Mood tracks the effect: comfort and good humor make people more trusting and less vigilant, whereas small doses of difficulty or anxiety cue the slow system to engage. The mechanism matters for truth and risk because the experience of ease is about processing, not reality; it signals “seen before,” not “verified.” The chapter ties this to the book’s larger aim by showing how a fast, fluency‑loving system steers judgments toward the familiar unless an alert, effortful system interrupts to test the claim.
🎉 6 – Norms, Surprises, and Causes. In the 1940s at the Catholic University of Louvain, Albert Michotte used moving shapes to reveal the “launching effect”: when one disk contacted a second and stopped as the other started, observers instantly saw a causal push, and slight delays or gaps made that impression vanish. The demonstration showed that causality can be a percept—switched on or off by tiny spatiotemporal tweaks—rather than a slow inference. In everyday settings, the fast system similarly maintains a model of what is normal and flags deviations within moments. Repeated anomalies quickly feel less surprising because the internal model updates and reduces prediction error. After a surprise, the mind rushes to supply an explanation, often imputing intention or hidden forces even where none exist. Norm theory, developed by Kahneman and Dale Miller, explains why abnormal causes amplify counterfactuals and regret: unusual events make “what almost happened” easy to imagine, sharpening emotion and blame. That story-building impulse helps people navigate complexity but tilts them toward single-cause accounts and away from base rates. The broader point is that System 1 normalizes routine, spotlights departures, and stitches causes on the fly, while the slow system must intervene to ask whether the data truly warrant the tale being told.
🤸 7 – A Machine for Jumping to Conclusions. Shane Frederick’s bat-and-ball problem—published in 2005 in the Journal of Economic Perspectives—shows an intuitive but wrong answer (“10 cents”) arriving effortlessly, while the correct answer (“5 cents”) requires inhibition and a brief calculation. The same pattern appears across the Cognitive Reflection Test: many respondents accept the first fluent response and only a minority recruit effort to correct it. System 1 aims for coherence, not completeness, so it fills gaps, resolves ambiguity, and moves on with confidence that tracks story smoothness rather than evidence. Kahneman labels this habit WYSIATI—“What You See Is All There Is”—to capture how judgments rely on the fragment at hand and ignore missing information. The halo effect magnifies the error, letting one salient trait color our assessments of everything else. Because searching for disconfirming data is costly, the slow system often endorses the fast system’s draft, producing crisp but fragile conclusions. This shortcut is useful in familiar, low-stakes settings, yet risky when situations are novel, stakes are high, or information is one-sided. The chapter’s message is that confidence can be a feeling about narrative coherence, not a sign of reliability, and that reliability demands deliberate checks the mind is reluctant to perform.
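The arithmetic behind the bat-and-ball problem can be checked directly. The short Python sketch below (an illustration, not anything from the book) solves the linear constraint that the intuitive "10 cents" answer silently violates: a bat and a ball cost $1.10 together, and the bat costs $1.00 more than the ball.

```python
# Bat-and-ball problem: ball + bat = $1.10, bat = ball + $1.00.
from fractions import Fraction  # exact arithmetic avoids float rounding

total = Fraction(110, 100)   # combined price: $1.10
difference = Fraction(1, 1)  # the bat costs $1.00 more than the ball

# Substituting bat = ball + difference into ball + bat = total
# gives 2*ball + difference = total, so:
ball = (total - difference) / 2
bat = ball + difference

print(f"ball = ${float(ball):.2f}, bat = ${float(bat):.2f}")
# → ball = $0.05, bat = $1.05

# The fluent "10 cents" answer fails the stated difference:
intuitive_ball = Fraction(10, 100)
intuitive_bat = total - intuitive_ball
assert intuitive_bat - intuitive_ball != difference  # gap is $0.90, not $1.00
```

Running the check makes the chapter's point concrete: the wrong answer is not merely imprecise, it is inconsistent with the problem statement, yet it arrives with full confidence because it satisfies the easier question System 1 actually answered.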
⚖️ 8 – How Judgments Happen. At Princeton in 2005, Alexander Todorov and colleagues flashed pairs of U.S. congressional candidates’ faces for about a second and asked which looked more competent; those snap ratings predicted actual election outcomes better than chance. The finding illustrates “basic assessments”: automatic readings of trustworthiness or dominance that System 1 delivers from minimal cues. Often the mind does not answer the target question directly; it substitutes an intensity match—“How much does this person look like a leader?”—for an unobservable criterion—“How effective will this person be in office?” Because scales map neatly across domains (weak→strong, small→large), these matches feel natural and persuasive. When cue validity is high, the substitution works; when cues are weak or misleading, the same fluency fuels confident error. Judging by feel is fast and usually adequate, but it leans on surface regularities and neglects the unseen variables the slow system must collect. The chapter shows that many judgments are effortless transformations of whatever attributes are easiest to read, and accuracy improves when we notice which attribute has been silently swapped in and check whether it truly tracks the one we care about.
🔄 9 – Answering an Easier Question. In a 1983 Journal of Personality and Social Psychology study, Norbert Schwarz and Gerald Clore phoned people on sunny or rainy days and asked about life satisfaction; ratings were higher in good weather, but the effect largely disappeared when interviewers first drew attention to the weather. The pattern reveals attribute substitution: faced with a hard, global question (“How satisfied am I with my life?”), respondents unknowingly answer an easier, local one (“How do I feel right now?”) and misread the result as if it answered the original. Similar swaps occur when fear, familiarity, or fluency bleeds into judgments of risk, quality, or truth, because the easy attribute is ready, vivid, and feels diagnostic. Substitution conserves effort and usually yields a usable response, but it makes answers hostage to context and the availability of momentary feelings. Recognizing the swap—naming the easier question we’re actually answering—creates space for the slow system to gather relevant evidence and correct course. In the book’s larger frame, many biases trace to this quiet exchange between questions, where speed and fluency trump relevance unless attention intervenes.
II – Heuristics and Biases
🔢 10 – The Law of Small Numbers.
⚓ 11 – Anchors.
📊 12 – The Science of Availability.
⚠️ 13 – Availability, Emotion, and Risk.
🎓 14 – Tom W’s Specialty.
👩 15 – Linda: Less is More.
🔗 16 – Causes Trump Statistics.
📉 17 – Regression to the Mean.
🐎 18 – Taming Intuitive Predictions.
III – Overconfidence
🪞 19 – The Illusion of Understanding.
✅ 20 – The Illusion of Validity.
➗ 21 – Intuitions vs. Formulas.
🧠 22 – Expert Intuition: When Can We Trust It?
🌍 23 – The Outside View.
⚙️ 24 – The Engine of Capitalism.
IV – Choices
🎲 25 – Bernoulli’s Errors.
📈 26 – Prospect Theory.
🪙 27 – The Endowment Effect.
💥 28 – Bad Events.
🧮 29 – The Fourfold Pattern.
🦄 30 – Rare Events.
🛡️ 31 – Risk Policies.
🏅 32 – Keeping Score.
🔃 33 – Reversals.
🖼️ 34 – Frames and Reality.
V – Two Selves
🫂 35 – Two Selves.
📖 36 – Life as a Story.
🙂 37 – Experienced Well-Being.
🤔 38 – Thinking About Life.
Background & reception
🖋️ Author & writing. Daniel Kahneman is professor of psychology and public affairs emeritus at Princeton, and in 2002 he received the Nobel Prize in Economic Sciences for integrating psychological research into economics, especially judgment under uncertainty. [5][6] The book distills decades of work—much of it with Amos Tversky—on heuristics and biases and prospect theory for a general audience. [7] It frames thinking as two interacting “agents” and is organized into five parts that move from a two-systems primer to heuristics and biases, overconfidence, choices and the “two selves.” [1] The hardcover first edition was published in the United States by Farrar, Straus and Giroux on 25 October 2011 (ISBN 978-0-374-27563-1). [1] Major library records list that first edition at 499 pages. [8] Publisher materials and Kahneman’s own excerpt emphasize a plain, example-driven voice that links lab findings to everyday and policy decisions. [1][9]
📈 Commercial reception. Macmillan reports that the book has sold more than 2.6 million copies. [4] The Library of Congress notes that it reached the *New York Times* bestseller list and was named one of the best books of 2011 by *The Economist*, *The Wall Street Journal* and *The New York Times Book Review*. [5] It won the Los Angeles Times Book Prize for Current Interest (2011) and later the U.S. National Academies Communication Award (Book, 2012). [10][11]
👍 Praise. *The Guardian* lauded it as “an outstanding book” noted for “clarity of detail” and “precision of presentation” (13 December 2011). [12] *The New Yorker* praised its engaging account of our “systematic errors,” describing it as a humane book that nonetheless yields “dismaying” truths about rationality. [3] The LSE Review of Books called it “highly enjoyable and informative,” highlighting how it instills awareness of biases that lead to poor decisions. [13]
👎 Criticism. Methodologists have cautioned against over-interpreting reaction-time and similar measures as evidence for distinct “systems,” urging more careful inference in dual-process research. [14] Others, notably Gerd Gigerenzer, argue that “fast and frugal” heuristics can be adaptive and often outperform complex models, challenging an emphasis on bias. [15] During psychology’s replication crisis, Kahneman himself acknowledged that he had “placed too much faith in underpowered studies” underlying some social-priming results discussed in the book. [16]
🌍 Impact & adoption. The World Bank’s *World Development Report 2015: Mind, Society, and Behavior* embedded “fast and slow” thinking into policy design, explicitly citing Kahneman’s framework. [17] Following the report, the Bank launched eMBeD to apply these insights operationally. [18] In higher education, the book appears on course reading lists and recommended texts, including at Princeton, where a course site lists *Thinking, Fast & Slow* among background readings. [19] Public-sector toolkits have also adopted the System 1/System 2 distinction when training officials in evidence-based policy design. [20]
References
- ↑ 1.0 1.1 1.2 1.3 1.4 1.5 {{#invoke:citation/CS1|citation |CitationClass=web }}
- ↑ {{#invoke:citation/CS1|citation |CitationClass=web }}
- ↑ 3.0 3.1 {{#invoke:citation/CS1|citation |CitationClass=news }}
- ↑ 4.0 4.1 {{#invoke:citation/CS1|citation |CitationClass=web }}
- ↑ 5.0 5.1 5.2 {{#invoke:citation/CS1|citation |CitationClass=web }}
- ↑ {{#invoke:citation/CS1|citation |CitationClass=web }}
- ↑ {{#invoke:citation/CS1|citation |CitationClass=web }}
- ↑ {{#invoke:citation/CS1|citation |CitationClass=web }}
- ↑ {{#invoke:citation/CS1|citation |CitationClass=web }}
- ↑ {{#invoke:citation/CS1|citation |CitationClass=news }}
- ↑ {{#invoke:citation/CS1|citation |CitationClass=web }}
- ↑ {{#invoke:citation/CS1|citation |CitationClass=news }}
- ↑ {{#invoke:citation/CS1|citation |CitationClass=web }}
- ↑ {{#invoke:citation/CS1|citation |CitationClass=journal }}
- ↑ {{#invoke:citation/CS1|citation |CitationClass=web }}
- ↑ {{#invoke:citation/CS1|citation |CitationClass=web }}
- ↑ {{#invoke:citation/CS1|citation |CitationClass=web }}
- ↑ {{#invoke:citation/CS1|citation |CitationClass=web }}
- ↑ {{#invoke:citation/CS1|citation |CitationClass=web }}
- ↑ {{#invoke:citation/CS1|citation |CitationClass=web }}