Thinking, Fast and Slow
| {{Thinking, Fast and Slow/random quote}}
| pages = 512
| isbn = 978-0-374-27563-1
| goodreads_rating = 4.17
| goodreads_rating_date = 8 November 2025
| website = [https://us.macmillan.com/books/9780374275631/thinkingfastandslow us.macmillan.com]
}}
📘 '''''{{Tooltip|Thinking, Fast and Slow}}''''' (2011) is {{Tooltip|Daniel Kahneman}}'s account of the two modes of thought that shape judgment and choice: the fast, automatic, intuitive {{Tooltip|System 1}} and the slow, effortful, deliberate {{Tooltip|System 2}}.
Across five parts and thirty-eight chapters, it synthesizes decades of findings on {{Tooltip|heuristics and biases}}, overconfidence, {{Tooltip|prospect theory}}, and the two selves of experience and memory.
Its narrative moves from memorable experiments to applications in economics and policy and encourages readers to spot predictable errors and use ideas like the {{Tooltip|outside view}} and the {{Tooltip|premortem}} to improve their own decisions.
Reviewers praised its clarity and ambition; ''{{Tooltip|The New Yorker}}'' called it a humane inquiry into the “systematic errors in the thinking of normal people.”<ref name="NewYorker2011">{{cite news |title=Thinking, Fast and Slow |url=https://www.newyorker.com/magazine/2011/11/14/thinking-fast-and-slow |work=The New Yorker |date=6 November 2011 |access-date=8 November 2025}}</ref>
The book also reached a wide audience: {{Tooltip|Macmillan}} reports more than 2.6 million copies sold, and the {{Tooltip|Library of Congress}} notes that it landed on the ''New York Times'' bestseller list and was named one of 2011’s best books by ''{{Tooltip|The Economist}}'', ''{{Tooltip|The Wall Street Journal}}'', and ''{{Tooltip|The New York Times Book Review}}''.<ref name="MacPB2013">{{cite web |title=Thinking, Fast and Slow (Trade Paperback) |url=https://us.macmillan.com/books/9780374533557/thinkingfastandslow/ |website=Macmillan |publisher=Farrar, Straus and Giroux |date=2 April 2013 |access-date=8 November 2025}}</ref><ref name="LOCNBF2021">{{cite web |title=Daniel Kahneman |url=https://www.loc.gov/events/2021-national-book-festival/authors/item/n81055169/daniel-kahneman/ |website=Library of Congress |publisher=U.S. Government |access-date=8 November 2025}}</ref>
== Chapter summary ==
=== I – Two Systems ===
👥 '''1 – The Characters of the Story.''' A face on a screen looks furious at a glance while the multiplication 17×24 forces concentration, a contrast that frames the two “characters” of thought. {{Tooltip|System 1}} runs automatically and effortlessly, generating impressions, intentions, and quick associations from scant cues. {{Tooltip|System 2}} allocates attention to demanding tasks, checks impulses, and can take control when needed, but it tires easily. Automatic operations—reading simple words, orienting to a sharp sound, finishing “bread and …”—are the province of {{Tooltip|System 1}}. Effortful operations—holding a string of digits, searching memory for a rule, or comparing investment options—draw on {{Tooltip|System 2}}. The division of labor is efficient: {{Tooltip|System 1}} runs continuously in the background, and {{Tooltip|System 2}}, which believes itself the hero of the story, mostly endorses what {{Tooltip|System 1}} proposes, mobilizing fully only when the automatic answer fails or the stakes demand scrutiny.
🎯 '''2 – Attention and Effort.''' {{Tooltip|J. Ridley Stroop}}'s classic interference demonstration (naming ink colors while suppressing the urge to read the words themselves) shows how costly it is to override an automatic response. The chapter's central evidence is pupillometry: working with Jackson Beatty, {{Tooltip|Daniel Kahneman}} found that the pupil dilates in close proportion to mental effort, widening steadily during the “Add-1” task (incrementing each digit of a memorized string) and nearing its limit with “Add-3,” then contracting the moment the answer is delivered or the person gives up. The pupil thus serves as a visible index of {{Tooltip|System 2}} at work. Attention is a strict budget: intense focus on one demanding task leaves people effectively blind to other events, and two effortful activities cannot simply be run in parallel. Skill reduces the effort a task demands, which is why expert performance feels easy from the inside. People and animals alike obey a “law of least effort,” drifting toward the least demanding way to reach a goal. Mental effort is real, measurable, and scarce; because {{Tooltip|System 2}} spends it reluctantly, the effortless suggestions of {{Tooltip|System 1}} usually carry the day.
🦥 '''3 – The Lazy Controller.''' Evidence for a “lazy controller” comes from {{Tooltip|Roy Baumeister}}'s ego-depletion studies, in which people who first exert self-control (resisting fresh chocolate cookies, suppressing emotion during a distressing film) give up sooner on a subsequent frustrating task, as if willpower drew on a shared, limited resource; his group also reported that a glucose drink, but not an artificially sweetened one, restores performance. {{Tooltip|Shane Frederick}}'s bat-and-ball problem (“A bat and ball cost $1.10. The bat costs one dollar more than the ball. How much does the ball cost?”) exposes the laziness directly: the intuitive answer of 10 cents is wrong, yet more than half of students at Harvard, MIT, and Princeton gave it rather than spend a few seconds checking. Not all sustained attention is aversive; {{Tooltip|Mihaly Csikszentmihalyi}}'s “flow” describes absorbed, effortless concentration that needs no self-command. Walter Mischel's delayed-gratification experiments with four-year-olds, and {{Tooltip|Keith Stanovich}}'s distinction between intelligence and rationality, suggest that supervising one's own intuitions is a separable ability. {{Tooltip|System 2}} can monitor {{Tooltip|System 1}}, but it tends to ratify plausible answers without checking; whether it intervenes depends on momentary effort, motivation, and resources.
🧩 '''4 – The Associative Machine.''' The mind’s associativity appears in priming: after seeing or hearing “EAT,” people are more likely to complete the fragment “SO_P” as “SOUP,” whereas “WASH” nudges “SOAP.” John Bargh and colleagues at {{Tooltip|New York University}} in the mid-1990s reported that volunteers exposed to scrambled sentences containing words linked to old age then walked more slowly down a corridor, as if the idea of “elderly” had prepared a matching action tendency. In other studies, reminders of money made people more self-sufficient and less helpful, and exposure to hostile words shaped later interpretations of ambiguous behavior. These effects arise without awareness, travel rapidly along networks of related ideas, and color perception, memory, and motor readiness in a single sweep. Because the network favors coherence, it stitches fragments into a simple story that feels obvious and complete. That rapid storymaking streamlines ordinary life but also seeds biases such as the {{Tooltip|halo effect}} and stereotype-consistent judgments. In this framework, {{Tooltip|System 1}} operates as an {{Tooltip|associative machine}} that predicts the next moment from whatever is at hand. Unless {{Tooltip|System 2}} actively questions that first draft, subtle cues can redirect both what is seen and what is done before reasoning begins.
😌 '''5 – Cognitive Ease.''' Cognitive ease is the sensation of fluency created by repetition, clarity, and familiarity, and it can be observed in simple laboratory tasks. In “illusion-of-truth” experiments, statements heard before—even when flagged as dubious—are rated as more likely to be true on later presentation. At {{Tooltip|Princeton}} in 2006, Adam Alter and {{Tooltip|Daniel Oppenheimer}} reported that stocks with more pronounceable ticker symbols enjoyed higher early returns, consistent with investors rewarding fluency. The same logic shows up in typography: a high-contrast, clean font makes instructions feel simpler and more acceptable, while a faint or hard-to-read font slows people down and invites scrutiny. Mere exposure shifts liking; a name, logo, or slogan encountered repeatedly acquires a warm, effortless feel that is easily mistaken for accuracy or safety. Mood tracks the effect: comfort and good humor make people more trusting and less vigilant, whereas small doses of difficulty or anxiety cue the slow system to engage. Because ease reflects processing rather than reality, it signals “seen before,” not “verified.” A fast, fluency-loving system steers judgments toward the familiar unless an alert, effortful system interrupts to test the claim.
🎉 '''6 – Norms, Surprises, and Causes.''' In the 1940s at the {{Tooltip|Catholic University of Louvain}}, {{Tooltip|Albert Michotte}} used moving shapes to reveal the “launching effect”: when one disk contacted a second and stopped as the other started, observers instantly saw a causal push, and slight delays or gaps made that impression vanish. The demonstration showed that causality can be a percept—switched on or off by tiny spatiotemporal tweaks—rather than a slow inference. In everyday settings, the fast system similarly maintains a model of what is normal and flags deviations within moments. Repeated anomalies quickly feel less surprising because the internal model updates and reduces prediction error. After a surprise, the mind rushes to supply an explanation, often imputing intention or hidden forces even where none exist. Norm theory, developed by {{Tooltip|Daniel Kahneman}} and Dale Miller, explains why abnormal causes amplify counterfactuals and regret: unusual events make “what almost happened” easy to imagine, sharpening emotion and blame. That story-building impulse helps people navigate complexity but tilts them toward single-cause accounts and away from base rates. {{Tooltip|System 1}} normalizes routine, spotlights departures, and stitches causes on the fly; the slow system must check whether the data warrant the tale being told.
🤸 '''7 – A Machine for Jumping to Conclusions.''' {{Tooltip|Shane Frederick}}'s cognitive-reflection problems, in which an attractive wrong answer arrives instantly, exemplify this chapter's theme: {{Tooltip|System 1}} is a machine for jumping to conclusions. Hearing “Ann approached the bank,” listeners picture money rather than a river, because context silently resolves ambiguity toward the most accessible interpretation and conscious doubt never arises. {{Tooltip|Daniel Gilbert}} argued, following Spinoza, that understanding a statement begins with believing it; “unbelieving” is a second, effortful operation, which is why tired or distracted people are more gullible. {{Tooltip|Solomon Asch}}'s classic lists (“intelligent, industrious, impulsive, critical, stubborn, envious” versus the same traits in reverse order) show that early items color the meaning of later ones, a {{Tooltip|halo effect}} in miniature; one practical remedy is to decorrelate errors, as when witnesses are kept from conferring. The governing principle is {{Tooltip|WYSIATI}} (“what you see is all there is”): judgments are assembled from the evidence at hand, however scanty, with no allowance for the evidence that is missing. Coherence, not completeness, produces confidence, so thin information yields assured stories. Jumping to conclusions is efficient when stakes are low and errors cheap; when they are not, {{Tooltip|System 2}} must slow down, consider alternatives, and ask what is not being seen.
⚖️ '''8 – How Judgments Happen.''' At {{Tooltip|Princeton}} in 2005, {{Tooltip|Alexander Todorov}} and colleagues flashed pairs of U.S. congressional candidates’ faces for about a second and asked which looked more competent; those snap ratings predicted actual election outcomes better than chance. The finding illustrates “basic assessments”: automatic readings of trustworthiness or dominance that {{Tooltip|System 1}} delivers from minimal cues. Often the mind does not answer the target question directly; it substitutes an intensity match—“How much does this person look like a leader?”—for an unobservable criterion—“How effective will this person be in office?” Because scales map neatly across domains (weak→strong, small→large), these matches feel natural and persuasive. When cue validity is high, the substitution works; when cues are weak or misleading, the same fluency fuels confident error. Judging by feel is fast and usually adequate, but it leans on surface regularities and neglects unseen variables the slow system must collect. Many judgments are effortless transformations of the easiest attributes; accuracy improves when we spot which attribute was silently swapped in and test whether it truly tracks the one we care about.
🔄 '''9 – Answering an Easier Question.''' In a 1983 ''{{Tooltip|Journal of Personality and Social Psychology}}'' study, {{Tooltip|Norbert Schwarz}} and {{Tooltip|Gerald Clore}} phoned people on sunny or rainy days and asked about life satisfaction; ratings were higher in good weather, but the effect largely disappeared when interviewers first drew attention to the weather. The pattern reveals attribute substitution: faced with a hard, global question (“How satisfied am I with my life?”), respondents unknowingly answer an easier, local one (“How do I feel right now?”) and misread the result as if it answered the original. Similar swaps occur when fear, familiarity, or fluency bleeds into judgments of risk, quality, or truth, because the easy attribute is ready, vivid, and feels diagnostic. Substitution conserves effort and usually yields a usable response, but it makes answers hostage to context and the availability of momentary feelings. Recognizing the swap—naming the easier question we’re actually answering—creates space for the slow system to gather relevant evidence and correct course. Many biases trace to this quiet exchange between questions, where speed and fluency trump relevance unless attention intervenes.
=== II – Heuristics and Biases ===
🔢 '''10 – The Law of Small Numbers.''' A well-circulated statistical vignette maps kidney cancer across the 3,141 counties of the United States and finds that the very lowest rates cluster in sparsely populated, rural, largely Republican counties—until a second pass shows that the very highest rates cluster there too. The puzzle tempts causal stories about lifestyle or environment, but the simplest explanation is sample size: small populations produce more variable extremes. {{Tooltip|Daniel Kahneman}} ties this to his 1971 work with {{Tooltip|Amos Tversky}} at the {{Tooltip|Hebrew University}}, showing that people—researchers included—expect small samples to mirror the parent population far too closely. The same mistake fueled an education fad: because the top-scoring schools in national comparisons were often small, a major foundation spent heavily to create small high schools; overlooked was that the worst performers were often small as well. In hiring, medicine, and investing, intuitive pattern-spotting prefers neat causes over noisy denominators, so clusters and streaks are overread as meaningful. Even statisticians in their studies gave poor advice about sample sizes for replications, revealing how seductive the error can be. The recurring symptom is overconfidence attached to striking but unrepresentative data. Intuitive judgment underestimates how wildly results can swing when samples are small; a numerate {{Tooltip|System 2}} that attends to sample size keeps randomness from being mistaken for insight.
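The engine here is ordinary sampling error, and a rough calculation shows its size (the rates below are illustrative, not the book's): the standard error of an observed incidence rate falls with the square root of the population,
:<math>\mathrm{SE}(\hat p) = \sqrt{\frac{p(1-p)}{n}}.</math>
With a true rate of 50 cases per 100,000, a county of 1,000 residents has a standard error near 71 per 100,000, larger than the rate itself, so observed rates of zero or of triple the truth are unremarkable; a county of 1,000,000 has a standard error near 2 per 100,000 and stays close to the mean.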
⚓ '''11 – Anchors.''' {{Tooltip|Amos Tversky}} and {{Tooltip|Daniel Kahneman}} demonstrated anchoring with a wheel of fortune rigged to stop only at 10 or 65: students who saw 10 estimated the share of African nations in the UN at about 25% on average, while those who saw 65 said about 45%, though the wheel was obviously uninformative. Anchoring works through two routes. One is deliberate but insufficient adjustment by {{Tooltip|System 2}}: people start at the anchor and stop moving too soon, the way drivers leaving a highway take city curves too fast. The other is a {{Tooltip|System 1}} priming effect: the anchor selectively activates compatible evidence, so even absurd numbers drag estimates toward them. In a study by Birte Englich, Thomas Mussweiler, and Fritz Strack, experienced German judges who rolled loaded dice before sentencing a hypothetical shoplifter proposed about eight months after rolling a 9 and about five months after rolling a 3 (an anchoring index of roughly 50%). Real-estate agents touring a house were swayed by the asking price while denying any influence, and a supermarket sign reading “limit of 12 per person” roughly doubled the number of soup cans shoppers bought. Anchoring is among the most reliable effects in experimental psychology, and it is routinely exploited in list prices and negotiations. Because any number on the table contaminates the estimate, the defense must be deliberate: mobilize {{Tooltip|System 2}}, argue the case against the anchor, or refuse to negotiate from an outrageous offer.
📊 '''12 – The Science of Availability.''' In a 1973 paper, {{Tooltip|Amos Tversky}} and {{Tooltip|Daniel Kahneman}} asked whether more English words begin with the letter K or have K as the third letter; because words that start with K come to mind more easily, many people judged that category as larger, even though the opposite is true in typical texts. In another experiment, listeners heard lists mixing famous and less famous names—say, 19 well-known men and 20 obscure women—and later estimated that the gender associated with famous names had appeared more often. A later program of studies led by {{Tooltip|Norbert Schwarz}} showed that ease of retrieval can outweigh content: when people listed 6 examples of their own assertive behavior, they felt more assertive than those asked to list 12, because producing a dozen felt difficult and the mind used that difficulty as information. The same metacognitive cue appears across domains: repeated headlines, vivid images, and clean typography make claims feel truer because they are processed fluently. {{Tooltip|Availability}} shapes frequency and probability judgments not by counting cases, but by sampling what comes quickly to mind and how easy that felt. It is a helpful shortcut in familiar settings, yet it skews perception whenever salience, recency, or media coverage distort what is retrievable. Minds mistake the experience of recall for a property of the world; a reflective {{Tooltip|System 2}} must ask whether what was easy to remember is also representative.
⚠️ '''13 – Availability, Emotion, and Risk.''' {{Tooltip|Paul Slovic}} and colleagues documented the {{Tooltip|affect heuristic}}: judgments of a technology's risks and of its benefits, which are logically distinct, turn out to be inversely correlated because both are read off a single feeling, so nuclear power seems high-risk and low-benefit while familiar things seem safe and useful; new information that raises perceived benefits also lowers perceived risks. Earlier surveys with Sarah Lichtenstein and Baruch Fischhoff showed availability distorting estimates of causes of death: tornadoes were judged deadlier than asthma and accidents about as deadly as disease, though disease kills many times more people, because dramatic, well-covered causes are overestimated and quiet ones neglected. Slovic argued that the public's “risk as feelings” embodies legitimate values (dread, catastrophic potential, fairness) that expert statistics omit, while {{Tooltip|Cass Sunstein}} countered that fear-driven policy misallocates lives and money. {{Tooltip|Timur Kuran}} and Sunstein's {{Tooltip|availability cascade}} names the amplifying spiral in which media stories feed public alarm, alarm makes the story bigger, and politics responds to the volume rather than the hazard, as in the Love Canal and Alar episodes, where modest risks consumed vast attention and budgets. Emotion makes events available and availability feeds emotion; sensible risk policy needs institutions that respect public feeling without being governed by it.
🎓 '''14 – Tom W’s Specialty.''' In 1973, {{Tooltip|Amos Tversky}} and {{Tooltip|Daniel Kahneman}} published a set of experiments in ''Psychological Review'' built around a fictional graduate student named Tom W, whose personality sketch sounded like a stereotypical computer scientist. One group of participants estimated base rates for nine fields of study among first-year U.S. graduate students; another judged how similar Tom W was to typical students in those fields; a third predicted his field. Despite knowing that large programs like education and the humanities enroll many more students than computer science, many respondents ranked Tom W as more likely to be in computer science because the description fit the stereotype. The experiment showed how people leap from a vivid description to a probability judgment without integrating prior odds. Even when base rates were made explicit, judgments gravitated toward resemblance, not frequency. The pattern held whether answers were ranks or numerical probabilities, demonstrating that the mind privileges how well a case fits a category over how many such cases exist. Bayes’s rule would combine prior enrollment shares with the diagnostic value of the description; instead, judgments treated the description as if it were fully reliable. Representativeness drives predictions while base rates are neglected when they feel merely statistical; {{Tooltip|System 2}} often fails to correct the weak link between a sketch and the underlying distribution.
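The Bayesian correction the chapter calls for can be sketched with illustrative numbers (the base rate and likelihood ratio below are assumptions, not the study's figures): if 3% of graduate students are in computer science and the sketch is four times as likely to describe a computer-science student as one from another field, the posterior odds are
:<math>\frac{0.03}{0.97}\times 4 \approx 0.124, \qquad P(\text{CS}\mid \text{sketch}) \approx \frac{0.124}{1.124} \approx 0.11,</math>
a modest probability even for a highly diagnostic-seeming description.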
👩 '''15 – Linda: Less is More.''' In 1983, {{Tooltip|Amos Tversky}} and {{Tooltip|Daniel Kahneman}} published the Linda problem: Linda is thirty-one, single, outspoken, very bright, a former philosophy major deeply concerned with discrimination and social justice. Asked to rank statements about her, roughly 85–90% of undergraduates at several major universities judged “Linda is a bank teller and is active in the feminist movement” more probable than “Linda is a bank teller,” which is logically impossible, since every feminist bank teller is a bank teller. This {{Tooltip|conjunction fallacy}} arises because the richer description is more representative of Linda, and representativeness, not probability, answers the question that comes to mind. Statistically sophisticated respondents erred too; Stephen Jay Gould confessed that “a little homunculus in my head continues to jump up and down, shouting at me” the wrong answer. The “less is more” pattern recurs in valuation: Christopher Hsee found that a dinnerware set of 24 intact pieces was priced higher, when judged alone, than a 40-piece set containing all the same pieces plus several more, a few of them broken, because the broken items dragged down the average impression even though the larger set strictly dominates. Joint presentation lets logic win; single evaluation lets the stereotype rule. Plausibility masquerades as probability, and adding detail makes a story more convincing while making it strictly less likely; sound judgment must respect the logic of inclusion that vivid descriptions invite the mind to ignore.
🔗 '''16 – Causes Trump Statistics.''' A well-known base-rate puzzle asks about a night-time hit-and-run in a city where 85% of cabs are Green and 15% Blue, and a tested witness is 80% accurate at identifying colors; most people say the cab was Blue with 80% probability, ignoring the population split that yields a Bayesian answer near 41%. When the scenario is changed so that both firms are the same size but Green cabs cause about 85% of accidents, judgments swing toward the base rate because it now feels like a causal explanation. The numbers in the two stories are mathematically equivalent, but the mind treats them differently depending on whether they imply a mechanism. People readily weave stereotypes from causal base rates (“Green drivers are reckless”) and discount statistical base rates that lack a story. This preference for causes shows up in legal reasoning, health scares, and everyday attribution, where a single vivid observation trumps a large neutral denominator. The lesson is not to reject causes, but to force statistical and causal information to meet on the same page before deciding. {{Tooltip|System 1}} privileges narratives that link events; {{Tooltip|System 2}} must bring base rates back into judgment when stories run ahead of evidence.
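The 41% figure follows in one line from Bayes’s rule applied to the chapter’s numbers:
:<math>P(\text{Blue}\mid\text{witness says Blue}) = \frac{0.15 \times 0.80}{0.15 \times 0.80 + 0.85 \times 0.20} = \frac{0.12}{0.29} \approx 0.41.</math>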
📉 '''17 – Regression to the Mean.''' While working with Israeli Air Force flight instructors, I heard a confident claim that harsh criticism improves performance whereas praise makes it worse—based on observing cadets who often faltered after a superb maneuver and improved after a poor one. The pattern was real, but the explanation was not: performances that include luck tend to be followed by outcomes closer to average, regardless of what instructors say or do. The same tendency appears in athletics (“cover jinxes”), sales streaks, and test–retest scores, where extreme results are naturally followed by less extreme ones. Sir Francis Galton quantified this in 1886 with parent–child height data, showing that exceptional parents have children closer to the population mean. {{Tooltip|Regression to the mean}} is easiest to miss when attention is fixed on individual cases and causal stories—talent, effort, motivation—while variability and noise are overlooked. Punishment then seems to work and reward to fail because changes after extremes are misread as effects of feedback rather than statistics. Good evaluation requires separating skill from luck and comparing outcomes to appropriate baselines over time. {{Tooltip|System 1}} insists on a tale for every rise and fall; only a statistical {{Tooltip|System 2}} corrects for how noise drags extremes back toward the mean.
🐎 '''18 – Taming Intuitive Predictions.''' Consider “Julie,” a precocious reader, and the task of predicting her college GPA years later: most people intuit a high number that matches the impression and ignore how weakly early reading predicts distant outcomes. A more accurate method starts with a baseline (the average GPA for comparable students), forms an intuitive estimate from the available cues, gauges the correlation between cue and target, and then moves only partway from the baseline toward the intuition. When the cue–outcome correlation is modest, extreme intuitive forecasts must be pulled back toward the mean; when it is near zero, the baseline rules. This approach reduces systematic over- and under-shooting that comes from treating impressions as perfectly reliable. It also forces attention to the {{Tooltip|reference class}} from which the baseline is drawn and to how much predictive information the cues genuinely carry. Extreme predictions then survive only when the evidence is strong, which is rare. Intuitive forecasts inherit the extremeness of the impressions behind them; a disciplined {{Tooltip|System 2}} anchors on the baseline and moves toward the intuition only as far as the correlation warrants, a procedure less satisfying than a bold call but systematically closer to the truth.
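The procedure reduces to a single formula, with <math>\rho</math> the cue–outcome correlation (the GPA numbers below are illustrative):
:<math>\text{prediction} = \text{baseline} + \rho \times (\text{intuition} - \text{baseline}).</math>
For a baseline GPA of 3.0, an intuitive estimate of 3.8, and <math>\rho = 0.3</math>, the defensible prediction is <math>3.0 + 0.3 \times 0.8 = 3.24</math>; with <math>\rho = 0</math> the baseline stands alone, and only with <math>\rho = 1</math> would the intuition be taken at face value.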
=== III – Overconfidence ===
🪞 '''19 – The Illusion of Understanding.''' A glossy business-press account of Google’s rise strings decisive hires, bold product calls, and near-misses into a single, satisfying arc, giving readers the feeling that the company’s success was inevitable and decipherable. That feeling is a mirage built from selective facts, hindsight, and the {{Tooltip|halo effect}}, which credits leaders with foresight when results are good and faults them when results sour. Outcome knowledge narrows what once felt uncertain into a tidy plot, and hindsight bias, documented by Baruch Fischhoff in the early 1970s, quietly rewrites what people believed before the outcome was known, so forecasters remember having “known it all along.” Because decisions are graded by outcomes rather than by the information available at the time, officials are blamed for reasonable bets that failed and lucky gamblers are celebrated, which breeds defensive caution in institutions. Philip Rosenzweig's ''The Halo Effect'' shows the business press inferring character from results: the same CEO is “visionary” while the stock rises and “rigid” after it falls. The firms celebrated in ''Built to Last'' and similar studies largely lost their edge over the comparison companies in the years that followed, about what regression to the mean predicts. A story of the past that feels fully explanatory creates the illusion that the future can be forecast; the sense of understanding rests on narrative coherence, not on demonstrated predictive skill.
✅ '''20 – The Illusion of Validity.''' Many decades ago, while serving in the Israeli Army’s Psychology Branch, I helped rate officer candidates in a “leaderless group challenge,” a British-designed World War II exercise where eight strangers, stripped of insignia and tagged by number, had to shoulder a long log together and get it over a six-foot wall without letting it touch. Under a scorching sun, my colleagues and I felt sure we could spot future leaders from a few minutes of talk, posture, and initiative. Follow-ups showed our predictions barely beat chance, yet our confidence survived each new batch of evidence. The feeling came from a crisp story—visible traits seemed to map neatly onto military success—so our minds mistook coherence for validity, much like seeing the {{Tooltip|Müller-Lyer illusion}} even after learning the lines are equal. Years later, a 1984 visit to a Wall Street firm revealed the same pattern in stock-picking: enormous effort and training produced strong conviction without durable predictive edge. Across domains, high subjective confidence indicates a well-fitted narrative more than a reliable forecast. Confidence is a feeling about a story’s internal fit, not a calibrated estimate of accuracy; selective coherence keeps {{Tooltip|System 1}} locked on a pattern unless hard feedback forces audit and revision. ''I was so struck by the analogy that I coined a term for our experience: the illusion of validity.''
➗ '''21 – Intuitions vs. Formulas.''' {{Tooltip|Princeton}} economist {{Tooltip|Orley Ashenfelter}} showed how a three-variable weather rule—summer temperature, harvest rainfall, and prior winter rain—predicts the future prices of Bordeaux vintages with striking accuracy (correlation above .90), outdoing celebrated tasters years or decades later. {{Tooltip|Paul Meehl}}'s 1954 ''Clinical vs. Statistical Prediction'' reviewed some twenty studies pitting trained clinicians against simple actuarial rules and found the rules matched or beat the experts; roughly two hundred later comparisons across medicine, parole, and credit have upheld the verdict, with clinicians winning almost never. {{Tooltip|Robyn Dawes}}'s “The Robust Beauty of Improper Linear Models” showed that even equal-weight formulas assembled from common sense rival optimally fitted regressions, because the human judge's fatal flaw is inconsistency: shown the same case twice, experts often give different answers. Virginia Apgar's 1953 newborn score (five signs, each rated 0 to 2) is the emblematic checklist that outperformed obstetricians' global judgment and became standard in delivery rooms. Resistance to algorithms runs deep and is partly moral: an error by a machine feels worse than the same error by a person. The practical advice is to structure judgment rather than abandon it; in hiring, score a handful of independent traits with factual questions and commit to the sum, the procedure Kahneman used when he redesigned interviews for the Israeli Army and measurably improved their validity. Where environments are noisy and predictability low, consistency beats subtlety: formulas never tire, never get bored, and weigh the same evidence the same way every time.
🧠 '''22 – Expert Intuition: When Can We Trust It?''' In {{Tooltip|Gary Klein}}'s studies of fireground commanders, experienced firefighters made life-and-death calls without comparing options: a pattern recognized from years of fires suggested a single plausible action, which was mentally simulated and then adopted or adjusted (the recognition-primed decision model). Klein, a champion of skilled intuition, and Kahneman, a skeptic of expert confidence, undertook an adversarial collaboration, published in 2009 as “Conditions for Intuitive Expertise: A Failure to Disagree.” Their joint answer: intuition deserves trust when two conditions hold, an environment regular enough to be learnable and prolonged practice with prompt, unambiguous feedback. Chess masters, firefighters, anesthesiologists, and nurses pass the test; stock pickers, clinicians making long-term forecasts, and political pundits do not, because their environments are close to random or their feedback arrives late and noisy. Herbert Simon's definition frames the chapter: the situation provides a cue, the cue gives access to information stored in memory, and “intuition is nothing more and nothing less than recognition.” Subjective confidence is no guide, since the illusion of validity produces strong conviction even in zero-validity environments. The question to ask of any intuitive judgment is not how certain the expert feels but whether the domain ever offered a genuine chance to learn its regularities.
🌍 '''23 – The Outside View.''' In the 1970s, a team in Israel—teachers, psychology students, and {{Tooltip|Seymour Fox}} of the {{Tooltip|Hebrew University}}'s School of Education, an authority on curriculum development—gathered to write a high-school textbook and curriculum on judgment and decision making. Asked for individual estimates of the time to completion, members' inside-view guesses clustered around two years. {{Tooltip|Daniel Kahneman}} then asked Fox how long comparable teams had taken: about 40%, he recalled, never finished at all, and none that finished took less than seven years; yet neither he nor anyone else applied that statistic to the team's own case, and the project in fact took eight more years and produced a curriculum that was never used. The episode defines the {{Tooltip|planning fallacy}}: forecasts that track best-case plans while ignoring the distribution of outcomes in similar projects. The Scottish Parliament building, estimated in 1997 at up to £40 million, ultimately cost about £431 million; surveys of rail projects and home renovations show the same systematic optimism. The remedy is the outside view, formalized by {{Tooltip|Bent Flyvbjerg}} as {{Tooltip|reference class forecasting}}: identify the relevant class of past projects, anchor on its base-rate distribution, and adjust only for specific, defensible reasons. The inside view feels authoritative because it is rich in detail; the outside view is accurate because it is rich in cases.
⚙️ '''24 – The Engine of Capitalism.''' In a large 1988 survey of 2,994 new business owners, Arnold Cooper, Carolyn Woo, and William Dunkelberg found that 81% rated their own venture’s chance of success at 7 out of 10 or better, and fully one-third called success “dead certain,” while assigning markedly lower odds to ventures like theirs. {{Tooltip|Colin Camerer}} and {{Tooltip|Dan Lovallo}} named the mechanism behind such numbers “competition neglect”: entrants judge their prospects by their own plans and skills while ignoring the equally confident rivals entering beside them, which is how waves of optimistic entry precede waves of failure. Optimism is close to a job requirement for founders: optimists take more risks, persist through setbacks, and recruit believers, and this optimistic overconfidence works as the engine of capitalism, fueling entry and innovation even though a large share of new businesses fail within a few years. The costs are documented: Canadian inventors who paid for objective assessments of their ideas often persisted after being advised to quit, and a large Duke University survey of chief financial officers found that their 80% confidence intervals for stock-market returns captured the realized outcome far less often than 80% of the time. Society collects much of the benefit of the risks optimists absorb, while the optimists themselves bear much of the cost. {{Tooltip|Gary Klein}}'s “premortem” supplies partial discipline: before a plan is locked in, the team imagines that a year has passed and the plan has failed disastrously, then writes the history of that failure, legitimizing doubts that confidence would otherwise suppress. Optimism can be tempered but not eliminated; the aim is to keep its energy while forcing the outside view into the room.
=== IV – Choices ===
🎲 '''25 – Bernoulli’s Errors.''' In 1738, {{Tooltip|Daniel Bernoulli}} published “Specimen theoriae novae de mensura sortis” at the {{Tooltip|Imperial Academy of Sciences in Saint Petersburg}}, proposing that people evaluate gambles by the expected utility of wealth rather than by expected monetary value. He modeled utility with a logarithmic curve to capture diminishing marginal value, a move that neatly tamed the St. Petersburg paradox while preserving risk aversion at higher wealth levels. Yet the scheme treated outcomes as final states of wealth and ignored how people experience changes relative to a personal baseline. Everyday choices reveal that small, favorable bets are often rejected because the sting of a potential loss outweighs the pleasure of a comparable gain. Framing the same result as a loss or a gain shifts preference in ways the original utility account cannot explain, because it has no place for reference points. Bernoulli’s approach also cannot accommodate the robust asymmetry that losses feel larger than symmetric gains. Nor does it predict the pattern that people’s risk attitudes flip between gains and losses, or that tiny probabilities are overweighted. These discrepancies forced a revision of the theory to match how judgments are formed in real time. Subjective value depends on where one stands and how outcomes are framed, not only on end wealth; a fast, feeling-driven response to gains and losses must be tempered by a slower accounting of context.
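A standard sketch of Bernoulli’s resolution (textbook form, ignoring initial wealth): the St. Petersburg gamble pays <math>2^n</math> ducats with probability <math>2^{-n}</math>, so its expected value diverges while its expected log-utility converges,
:<math>\sum_{n=1}^{\infty} 2^{-n}\cdot 2^{n} = \infty, \qquad \sum_{n=1}^{\infty} 2^{-n}\log 2^{n} = 2\log 2 = \log 4.</math>
On the logarithmic curve the gamble is worth the same as four ducats for certain, which is why no one pays a fortune to play.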
📈 '''26 – Prospect Theory.''' Building on experiments from the 1970s and a formal paper in ''{{Tooltip|Econometrica}}'' (1979), {{Tooltip|prospect theory}} replaces final-wealth utility with a value function defined on gains and losses around a reference point. The function is concave for gains and convex for losses, and noticeably steeper for losses, capturing the empirical regularity that people dislike losses more than they like equivalent gains. The theory also swaps objective probabilities for {{Tooltip|decision weights}} that overweight small probabilities and underweight moderate to large ones. An “editing” stage—coding outcomes as gains or losses, simplifying combinations, and canceling common parts—helps explain framing reversals that leave expected values unchanged. Together these components account for insurance purchases, lottery play, and the tendency to accept sure gains while gambling to avoid sure losses. The framework unifies otherwise puzzling choices without assuming flawless calculation or stable utility over wealth. It mirrors how judgments are formed with limited attention and strong feelings about change; the slow system can use the framework to anticipate and correct predictable errors.
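A common parametric sketch of the value function (the form and median estimates come from Tversky and Kahneman’s 1992 cumulative version, not from this chapter’s text):
:<math>v(x) = \begin{cases} x^{\alpha} & x \ge 0 \\ -\lambda (-x)^{\alpha} & x < 0 \end{cases} \qquad \alpha \approx 0.88,\ \lambda \approx 2.25.</math>
The kink at zero and the coefficient <math>\lambda</math> encode loss aversion: a $100 loss weighs roughly as much as a $225 gain.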
🪙 '''27 – The Endowment Effect.''' In a series of markets reported by {{Tooltip|Daniel Kahneman}}, Jack Knetsch, and {{Tooltip|Richard Thaler}}, an advanced undergraduate economics class at Cornell University traded goods after first succeeding in “induced value” token markets that verified a clean supply–demand mechanism. When the same procedure turned to Cornell-branded coffee mugs priced at $6 in the bookstore (22 mugs in circulation), the predicted 11 trades failed to appear: across four mug markets, only 4, 1, 2, and 2 trades cleared. Reservation prices revealed the gap: median sellers would not part with a mug for less than about $5.25, while median buyers would pay only about $2.25–$2.75, with market prices between $4.25 and $4.75. Replications, including one with 77 students at {{Tooltip|Simon Fraser University}} using mugs and boxed pens, showed the same two-to-one ratio between willingness to accept and willingness to pay, even with chances to learn. A neutral “chooser” condition—deciding between a mug and money without initial ownership—behaved like buyers, implicating ownership itself rather than budgets or transaction costs. The asymmetry carried into field and survey evidence about fairness and status quo bias, where foregone gains are treated more lightly than out-of-pocket losses. Reference dependence plus loss aversion makes giving up a possession feel heavier than acquiring it; a slower, statistical view can correct for how ownership shifts the baseline.
💥 '''28 – Bad Events.''' In a 2011 ''{{Tooltip|American Economic Review}}'' paper, economists {{Tooltip|Devin G. Pope}} and {{Tooltip|Maurice E. Schweitzer}} analyzed more than 2.5 million {{Tooltip|PGA Tour}} putts captured by {{Tooltip|ShotLink}} lasers and found that pros were reliably more accurate on par putts than on birdie putts of the same length—evidence that avoiding a bogey (a loss relative to par) draws extra effort. Their field data echoed a broader pattern long cataloged in psychology: bad outcomes and threats command attention and action more than equally sized gains. {{Tooltip|Roy Baumeister}} and colleagues, reviewing results across relationships, feedback, learning, and memory, called this asymmetry “bad is stronger than good,” a theme that shows up whenever setbacks, penalties, or criticism weigh more heavily than comparable rewards. In negotiations and policy disputes, the same tilt stabilizes the status quo because potential losers mobilize more intensely than potential winners. Even when stakes are modest, people pass up favorable bets that involve any chance of loss, or they pay for warranties to fend off small hazards they would otherwise ignore. Outcomes are coded as gains or losses around a current baseline, and the loss side is steeper; negative cues also spread through the associative machinery, priming vigilance and tightening standards. A fast system that prioritizes danger and loss helps people survive, yet it also bends choices toward undue caution unless a slower system reframes the stakes and checks the baseline being used.
🧮 '''29 – The Fourfold Pattern.''' Maurice Allais's 1953 paradox first spotlighted a crack in expected utility theory: posed to an audience of eminent economists, his paired gambles led most to choose in ways that violate the theory they championed, pulled by the special attraction of certainty. {{Tooltip|Prospect theory}} explains such choices with {{Tooltip|decision weights}} that do not match probabilities: improbable outcomes are overweighted (the possibility effect, which sells lottery tickets) and outcomes that are almost certain are underweighted relative to certainty itself (the certainty effect, which sells settlements and insurance). In the book's table of decision weights, a 1% chance carries a weight near 5.5 and a 99% chance a weight near 91; the two ends of the probability scale are where psychology and arithmetic diverge most. Combining the weights with the gain–loss value function yields the {{Tooltip|fourfold pattern}} of risk attitudes: risk-averse for high-probability gains (take the sure win), risk-seeking for high-probability losses (gamble to avoid the sure loss, as desperate litigants and failing firms do), risk-seeking for low-probability gains (lotteries), and risk-averse for low-probability losses (insurance). The gamble-to-avoid-a-sure-loss cell is where bad situations compound, since fighting on usually turns acceptable losses into larger ones. Attitudes toward risk are not a single trait but a pattern fixed by whether outcomes are framed as gains or losses and whether their probabilities are small or large.
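The inconsistency Allais exposed is easy to verify under expected utility (a standard check, using gambles of the form the chapter presents and setting <math>u(0)=0</math>): preferring a 61% chance of $520,000 to a 63% chance of $500,000, but a certain $500,000 to a 98% chance of $520,000, requires
:<math>\frac{u(520{,}000)}{u(500{,}000)} > \frac{0.63}{0.61} \approx 1.033 \quad\text{and}\quad \frac{u(520{,}000)}{u(500{,}000)} < \frac{1.00}{0.98} \approx 1.020,</math>
which no utility function can satisfy; certainty carries extra weight that probabilities alone cannot express.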
🦄 '''30 – Rare Events.''' When rare hazards dominate the news, as with suicide bombings in {{Tooltip|Israel}} in the early 2000s, many people shun buses or public places despite tiny absolute risks—a social amplification {{Tooltip|Timur Kuran}} and {{Tooltip|Cass Sunstein}} describe as an {{Tooltip|availability cascade}}. The chapter asks how rare events get their weight: prospect theory predicts overweighting of small probabilities, but the weight actually assigned depends on how the event comes to mind. Vivid descriptions and individual cases inflate decision weights beyond what stated probabilities warrant; in studies by {{Tooltip|Paul Slovic}} and colleagues, clinicians were more reluctant to discharge a patient described as one of “10 in every 100 like him” who commit violence than one with “a 10% chance,” because the frequency wording summons images of actual people. Denominator neglect works the same way: many people prefer drawing from an urn with 8 winning marbles out of 100 over one with 1 out of 10, seeing the eight winners rather than the worse ratio. A salient rare event (terrorism, a jackpot, a plane crash) is overweighted; a rare event known only from description and never experienced may be underweighted instead, as research on choices from experience shows. Emotion, vividness, and ease of imagining set the weight a rare event receives; unless explicit numbers are forced back into view, both personal choices and public policy inherit the distortion.
🛡️ '''31 – Risk Policies.''' In one experiment, {{Tooltip|University of Chicago}} students were offered a series of bets—winning $10 or losing $5, repeated 100 times. Most refused a single bet but accepted the series, demonstrating that aggregation over time transforms a risky prospect into a near certainty of profit. The pattern mirrors real life: many people exhibit myopic loss aversion, overweighing each small setback instead of viewing the total return. The same bias shows up in investment behavior, where daily monitoring of portfolios amplifies anxiety and discourages optimal risk-taking. Institutions such as insurance companies and pension funds handle risk better by treating it in portfolios rather than as isolated gambles. {{Tooltip|Daniel Kahneman}} and {{Tooltip|Dan Lovallo}} described this narrow framing as evaluating choices one by one instead of under a consistent rule. Setting a {{Tooltip|risk policy}}, a standing rule such as “always take the highest deductible” and “never buy extended warranties” applied routinely, embeds each gamble in a broad frame, the lesson grasped by Paul Samuelson's colleague, who refused one favorable coin flip but would happily accept a hundred of them. Paired with the outside view, which checks exaggerated optimism in planning, a risk policy checks the exaggerated caution of loss aversion; the discipline is to tell oneself “you win a few, you lose a few” and to evaluate bundles rather than episodes.
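The arithmetic behind accepting the series can be computed from the bet described above: each play has expected value <math>0.5(10) + 0.5(-5) = \$2.50</math> and standard deviation $7.50, so 100 independent plays yield
:<math>\mathbb{E} = \$250, \qquad \sigma = 7.5\sqrt{100} = \$75,</math>
and losing money overall requires fewer than 34 wins in 100 fair trials, an outcome more than three standard deviations below the mean and hence a chance well under one in a thousand.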
🏅 '''32 – Keeping Score.''' The mind keeps score in {{Tooltip|mental accounts}}, a concept developed by {{Tooltip|Richard Thaler}}: the same dollars feel different depending on the account they occupy, which is why outcomes are savored or suffered account by account rather than as changes in total wealth. The {{Tooltip|disposition effect}} follows: {{Tooltip|Terrance Odean}}'s analysis of thousands of individual brokerage accounts found investors far more willing to sell winners than losers, because closing an account in the black feels like a victory, even though the winners they sold went on to outperform the losers they kept. The {{Tooltip|sunk-cost fallacy}} keeps people in failing projects, unhappy jobs, and doomed investments because walking away would close an account at a recognized loss; boards often hire outside CEOs, unburdened by old accounts, to do the canceling. Regret keeps score as well, and asymmetrically: bad outcomes produced by action sting more than identical outcomes produced by inaction, so people who anticipate regret tilt toward conventional and default choices. The asymmetry is social too, since an unusual decision that fails attracts more blame than a normal one. Partial protection comes from deciding with regret explicitly in mind and from precommitting to rules. Mental accounts are a form of narrow framing that prices outcomes by their stories; a broad, lifetime frame, and a refusal to honor sunk costs, keeps the score that actually matters.
🔃 '''33 – Reversals.''' A recurring pattern in decision research is preference reversal: evaluated one at a time, option A is favored over option B, yet side by side the ranking flips. {{Tooltip|Sarah Lichtenstein}} and {{Tooltip|Paul Slovic}} first showed it with gambles, where people chose a safe bet over a long shot but set a higher selling price on the long shot, and the effect survived a replication run on the floor of a Las Vegas casino. {{Tooltip|Christopher Hsee}}'s dictionary study makes the mechanism plain: judged alone, a like-new dictionary with 10,000 entries is valued above a used one with 20,000 entries and a torn cover, because condition is easy to evaluate in isolation and entry count has no intuitive scale; judged jointly, the larger dictionary wins, because comparison makes the decisive attribute visible. Single evaluation recruits {{Tooltip|System 1}} and whatever norms the lone case evokes; joint evaluation forces the comparison that is the business of {{Tooltip|System 2}}. The pattern unsettles legal practice: jurors assess punitive damages one case at a time, so awards track the emotional intensity within a category, and cross-category comparisons that would calibrate amounts are procedurally excluded. Judgments are made relative to the context the moment supplies; which of two inconsistent preferences gets expressed depends on the arbitrary fact of whether options are seen together or apart.
🖼️ '''34 – Frames and Reality.''' In the mid-1980s, {{Tooltip|Amos Tversky}} and {{Tooltip|Daniel Kahneman}} collaborated with doctors to test medical framing: when surgery was described as having a 90% survival rate, most patients accepted it; when described as having a 10% mortality rate, most refused. The two statements describe the same reality, yet the emotional tone of words—survival versus death—swings judgment. Because {{Tooltip|System 1}} responds to the description rather than to the thing described, a frame is not a transparent wrapper but, for most purposes, the reality being judged. Defaults choose on our behalf: organ-donation consent is near-universal in European countries where drivers must opt out and far lower in otherwise similar countries where they must opt in. Some frames are simply better tools, as when fuel use is stated in gallons per mile rather than miles per gallon, which corrects a misleading arithmetic intuition. Since some frame must always be chosen, the responsible course is to choose the one that leads to better decisions.
=== V – Two Selves ===
🫂 '''35 – Two Selves.''' In 1993 at the {{Tooltip|University of California}}, experiments by {{Tooltip|Daniel Kahneman}}, {{Tooltip|Barbara Fredrickson}}, Charles Schreiber, and {{Tooltip|Donald Redelmeier}} had volunteers endure two versions of a cold-pressor task: one hand submerged in 14 °C water for 60 seconds, and the other for 60 seconds followed by 30 seconds as the water was warmed slightly to 15 °C; most chose to repeat the longer trial because it ended less painfully. In 1996, {{Tooltip|Donald Redelmeier}} and {{Tooltip|Daniel Kahneman}} tracked real-time pain in 154 colonoscopy and 133 lithotripsy patients and found that remembered pain depended mainly on the peak and the final moments, not on total duration. A later randomized trial with more than 600 colonoscopy patients showed that adding a few minutes of milder discomfort at the end led people to rate the entire procedure as less unpleasant and to be more willing to return. These results expose a split between two selves. The {{Tooltip|experiencing self}} lives through each moment; the {{Tooltip|remembering self}} keeps score afterward and obeys the {{Tooltip|peak-end rule}}, averaging the worst moment and the final moment while almost entirely ignoring duration ({{Tooltip|duration neglect}}). Because decisions consult memory rather than experience, people choose to repeat the objectively worse episode when it leaves the better memory, exactly as the cold-water volunteers did. The remembering self's verdicts govern medical choices, vacations planned for the photographs, and evaluations of whole lives. The result is a tyranny of the remembering self: what maximizes the quality of memories is not what maximizes the quality of lived time. Confusing an experience with the memory of it is a compelling cognitive illusion, and any account of well-being must first decide which of the two selves it intends to serve.
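The peak-end arithmetic makes the cold-water preference intelligible (the pain scores here are illustrative, not the study's data): if remembered pain tracks the average of the worst and final moments, then
:<math>\text{short trial: } \tfrac{8+8}{2} = 8, \qquad \text{long trial: } \tfrac{8+5}{2} = 6.5,</math>
so the longer trial, despite strictly more total pain, leaves the milder memory and is the one volunteers chose to repeat.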
📖 '''36 – Life as a Story.''' {{Tooltip|Ed Diener}}, {{Tooltip|Derrick Wirtz}}, and {{Tooltip|Shigehiro Oishi}} ({{Tooltip|University of Illinois}}) asked respondents in 2001 to judge “wonderful lives” that ended abruptly versus those with extra years of mild happiness; many preferred the shorter life—a “James Dean effect” showing the dominance of endings in global evaluations. The same logic explains why a symphony spoiled by a scratch at the end is remembered as “ruined” despite a long stretch of enjoyment. Laboratory work on the peak-end rule aligns with this narrative bias: when people summarize experiences, they weight a few snapshots—peaks and the final scene—over duration. In life reviews, distinctive moments—awards, failures, breakups, recoveries—become chapter headings that overshadow long, ordinary stretches. The remembering self smooths plot lines, resolves contradictions, and privileges closure, which is why people accept more total discomfort for a better ending. That storytelling habit brings meaning and coherence but also distorts the arithmetic of lived time. We plan, choose, and judge with an eye to how the story will read later, not how it will feel most of the time; noticing the storyteller’s shortcuts allows better choices and engineered endings without ignoring the hours in between.
🙂 '''37 – Experienced Well-Being.''' To measure how days actually feel, the 2004 ''{{Tooltip|Science}}'' article introducing the {{Tooltip|Day Reconstruction Method}} ({{Tooltip|Daniel Kahneman}}, Krueger, Schkade, Schwarz, Stone) had 909 employed women reconstruct the prior day in episodes and rate their affect, a diary-like approach that reduces memory distortions. This work led to the {{Tooltip|U-index}} ({{Tooltip|Daniel Kahneman}} & {{Tooltip|Alan Krueger}}): the fraction of time a person spends in a predominantly unpleasant state, a measure of experienced misery that can be compared across activities and groups, with commuting scoring badly and socializing well. Large-scale Gallup surveys later separated the two quantities the DRM distinguishes: the feelings of the experiencing self and the life evaluation of the remembering self. With {{Tooltip|Angus Deaton}}, Kahneman reported in 2010 that day-to-day emotional well-being rises with income only up to a plateau of roughly $75,000 a year in the United States, while life evaluation keeps climbing with income; below that level, poverty amplifies the sting of misfortunes such as illness and divorce. Situational factors (time with friends, commuting, caregiving) move experienced feeling more than the stable circumstances people usually obsess over. Measuring well-being therefore requires saying which self is being asked: the experienced day and the evaluated life are related but distinct, and a policy aimed at one can easily miss the other.
🤔 '''38 – Thinking About Life.''' David Schkade and {{Tooltip|Daniel Kahneman}} asked students in the Midwest and in California to rate life satisfaction, their own and that of someone like them in the other region; everyone expected Californians to be happier, yet the measured satisfaction of the two groups was essentially the same. Climate barely figures in the experience of a life, but it dominates judgment the moment attention is drawn to it, which is the {{Tooltip|focusing illusion}}: “Nothing in life is as important as you think it is, while you are thinking about it.” The same illusion inflates forecasts of how much a new car, a raise, or a move will change happiness: attention soon withdraws from any stable circumstance, and adaptation does the rest, as studies of paraplegics and lottery winners suggest; paraplegics are not miserable most of the day because they are not thinking about their condition most of the day. Answers to global questions are likewise constructed from whatever is momentarily salient: students asked first about their dating lives report life satisfaction that tracks their romantic situation. “Miswanting,” choosing on the basis of a focused preview that exaggerates an outcome's lasting effect, follows directly. Thinking about life is not the same as living it; judgments of well-being are assembled on the spot, and attention is both the instrument of the judgment and its principal distortion.
== Background & reception ==
🖋️ '''Author & writing'''. {{Tooltip|Daniel Kahneman}} is professor of psychology and public affairs emeritus at {{Tooltip|Princeton}}, and in 2002 he received the Nobel Prize in Economic Sciences for integrating psychological research into economics, especially judgment under uncertainty.<ref name="LOCNBF2021" /><ref name="Nobel2002">{{cite web |title=The Prize in Economic Sciences 2002 — Press release |url=https://www.nobelprize.org/prizes/economic-sciences/2002/press-release/ |website=NobelPrize.org |publisher=The Royal Swedish Academy of Sciences |date=9 October 2002 |access-date=8 November 2025}}</ref> The book distills decades of work—much of it with {{Tooltip|Amos Tversky}}, who died in 1996 and with whom Kahneman built the heuristics-and-biases program and {{Tooltip|prospect theory}}—into a synthesis written for general readers.
📈 '''Commercial reception'''. {{Tooltip|Macmillan}} reports that the book has sold more than 2.6 million copies.<ref name="MacPB2013" /> The {{Tooltip|Library of Congress}} notes that it reached the ''New York Times'' bestseller list and was named one of the best books of 2011 by ''{{Tooltip|The Economist}}'', ''{{Tooltip|The Wall Street Journal}}'', and ''{{Tooltip|The New York Times Book Review}}''.<ref name="LOCNBF2021" /> It won the {{Tooltip|Los Angeles Times Book Prize for Current Interest}} (2011) and later the {{Tooltip|U.S. National Academies Communication Award}} (Book, 2012).<ref name="LATimes2011">{{cite news |title=2011 Los Angeles Times Book Prize Winners |url=https://www.latimes.com/la-mediagroup-2012-0420-htmlstory.html |work=Los Angeles Times |date=20 April 2012 |access-date=8 November 2025}}</ref><ref name="NAS2012">{{cite web |title=Daniel Kahneman’s ''Thinking, Fast and Slow'' Wins Best Book Award From Academies |url=https://www.nationalacademies.org/news/2012/09/daniel-kahnemans-thinking-fast-and-slow-wins-best-book-award-from-academies-milwaukee-journal-sentinel-slate-magazine-and-wgbh-nova-also-take-top-prizes-in-awards-10th-year |website=National Academies |date=13 September 2012 |access-date=8 November 2025}}</ref>
== Related content & more ==
=== YouTube videos ===
{{Youtube thumbnail | CjVQJdIrDJ0 | Daniel Kahneman on ''Thinking, Fast and Slow''}}
{{Youtube thumbnail | UO4BNlFkCZY | Animated summary — Productivity Game}}
=== CapSach articles ===