Thinking, Fast and Slow: Difference between revisions

➗ '''21 – Intuitions vs. Formulas.''' Princeton economist Orley Ashenfelter showed how a three‑variable weather rule—summer temperature, harvest rainfall, and prior winter rain—predicts the future prices of Bordeaux vintages with striking accuracy (correlation above .90), outdoing celebrated tasters years or decades later. Paul Meehl’s review of 20 studies had already found that simple statistical combinations routinely beat clinicians and counselors at predicting grades, parole violations, pilot training success, and more. The same lesson appears in the delivery room: Virginia Apgar’s five‑item, 0‑to‑2 scoring checklist standardized newborn assessment and helped cut infant mortality by turning scattered impressions into a consistent rule. Robyn Dawes pushed further, showing that “improper” models with equal weights often match or beat optimally weighted regressions and easily outperform unaided judgment. Humans are inconsistent and context‑sensitive—mood, order effects, and stray cues shift conclusions—whereas formulas return the same answer for the same inputs and don’t tire or improvise. People still resist algorithms, mistaking the vivid feel of expertise for proof of predictive power and clinging to the rare “broken‑leg” exception. The idea is that when environments are noisy and validity is low, disciplined rules deliver more reliable forecasts than expert impressions. The mechanism is noise reduction and proper weighting: System 2 embeds expertise into transparent, repeatable formulas that tame intuitive inconsistency and overfitting. ''The research suggests a surprising conclusion: to maximize predictive accuracy, final decisions should be left to formulas, especially in low‑validity environments.''
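Dawes's "improper" model can be illustrated with a minimal sketch (the function names and data are hypothetical, not from the book): each predictive cue is standardized, then summed with equal weights instead of regression-fitted ones.

```python
def standardize(xs):
    """Convert a column of raw cue values to z-scores."""
    mean = sum(xs) / len(xs)
    sd = (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5
    return [(x - mean) / sd for x in xs]

def equal_weight_scores(predictors):
    """Dawes-style 'improper' linear model: standardize each cue
    (one list per predictor), then sum with equal unit weights."""
    cols = [standardize(col) for col in predictors]
    return [sum(vals) for vals in zip(*cols)]
```

The point of the sketch is that no weights are estimated at all; the only judgment required is choosing which cues to include and the direction of each, which is what makes the model robust to the inconsistency that undermines clinical prediction.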
 
🧠 '''22 – Expert Intuition: When can we trust it?''' In Gary Klein’s widely cited firefighting case, a commander led his crew into a kitchen blaze; they began spraying water, and then, without knowing why, he heard himself shout, “Let’s get out of here!” Moments after the crew evacuated, the floor collapsed; only later did the commander notice the cues he had registered: an eerily quiet fire and intense heat around his ears, signs of a basement fire beneath them. The episode crystallizes how recognition from long practice can trigger fast, accurate action under pressure. Following Herbert Simon’s account of expertise, thousands of hours of exposure let professionals encode patterns so that the right response comes to mind as readily as a child naming a dog. Such intuitions are reliable only in domains with stable regularities and rapid, informative feedback—like firefighting, chess, anesthesia, and certain kinds of skilled trades. In low-validity environments, such as stock picking or long-range geopolitical forecasting, similar feelings arise but accuracy does not follow, and confidence becomes a poor guide. A productive “adversarial collaboration” with Klein clarifies the rule: trust intuition when the world is sufficiently regular and you have had ample, verified practice; otherwise, slow down and check. The mechanism is memory-driven pattern matching in System 1; when cues map cleanly onto learned structures, speed and accuracy align, but when cues are noisy or the structure drifts, the same feeling of certainty becomes an illusion. Within the book’s theme, expertise and heuristics both yield intuitions; the task is to tell skilled recognition from coherent stories. ''Intuition is nothing more and nothing less than recognition.''
 
🌍 '''23 – The Outside View.''' In the 1970s, a team in Israel—teachers, psychology students, and Seymour Fox of the Hebrew University’s School of Education—met every Friday to write a high‑school textbook on judgment and decision making and privately estimated 18–30 months to complete a draft. When asked to recall comparable projects, Fox reported that about 40% of such teams never finished and that none he knew of finished in under seven years (ten at the outside). The group pressed on; eight years later the manuscript was done, enthusiasm at the Ministry had faded, and the book was never used. The contrast between the confident “inside view” and the sobering “outside view” defines the planning fallacy: we extrapolate from our plan and recent progress and neglect unknown unknowns and base rates. Reference‑class forecasting corrects this by first anchoring on outcomes from a well‑chosen class of similar cases and only then adjusting for case‑specific facts. Psychologically, System 1’s WYSIATI builds a tidy story from what is in sight, while System 2 is needed to retrieve statistics about how such stories usually end. Connecting back to the book’s core, disciplined forecasts demand base rates up front, premortems to surface obstacles, and explicit tolerances for delay and drift. ''We should have quit that day.''
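The two-step logic of reference-class forecasting—anchor on the base rate, then adjust cautiously—can be sketched as follows (the function name and the 25% adjustment cap are illustrative assumptions, not figures from the book):

```python
import statistics

def outside_view_forecast(reference_durations, adjustment=0.0):
    """Anchor on the distribution of outcomes from comparable past
    projects (the reference class), then apply a bounded, case-specific
    inside-view adjustment so the base rate stays dominant."""
    baseline = statistics.median(reference_durations)
    cap = 0.25 * baseline  # illustrative cap on the adjustment
    return baseline + max(-cap, min(cap, adjustment))
```

For the textbook team, anchoring on "seven to ten years for comparable teams" before adjusting for their own skill would have produced a very different plan than the inside-view estimate of 18–30 months.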
 
⚙️ '''24 – The Engine of Capitalism.''' In a large 1988 survey of 2,994 new business owners, Arnold Cooper, Carolyn Woo, and William Dunkelberg found that 81% rated their own venture’s chance of success at 7 out of 10 or better, and fully one‑third called success “dead certain,” while assigning markedly lower odds to ventures like theirs. Colin Camerer and Dan Lovallo’s 1999 experiments then showed what happens when that confidence meets markets: when payoffs depend on relative skill, people overenter and lose, producing “optimistic martyrs” who persist despite poor prospects. Similar patterns appear in a decade‑long survey of U.S. CFOs asked each quarter for an 80% confidence interval for the next year’s S&P 500 return; realized returns fell inside those ranges far less often than 80%, a clean sign of miscalibration. Optimism, however, is not only a bias—it is also the fuel that starts firms, green‑lights projects, and keeps scientists and engineers pushing through failure, which is why economies need some surplus of confidence. The danger comes from competition neglect and the inside view: planners focus on their plan and skill, underrate rivals, and ignore what they don’t know. Mechanistically, System 1 spotlights goals and strengths and jumps to favorable scenarios; System 2 must import base rates, force premortems, and set advance exit rules so that exploration does not become a bonfire of capital. Put back into the book’s frame, progress at the societal level often rides on individual overconfidence—beneficial in the aggregate, costly in the particular. ''If you are allowed one wish for your child, seriously consider wishing him or her optimism.''
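The miscalibration test applied to the CFO intervals is simple to state: count how often realized outcomes land inside the stated ranges. A minimal sketch (hypothetical function name and toy data, not the CFO dataset):

```python
def interval_hit_rate(intervals, outcomes):
    """Fraction of realized outcomes falling inside the stated
    (low, high) intervals. For well-calibrated 80% confidence
    intervals, the hit rate should be close to 0.80; a much lower
    rate means the intervals were drawn too narrow."""
    hits = sum(low <= y <= high for (low, high), y in zip(intervals, outcomes))
    return hits / len(outcomes)
```

Overconfidence shows up here as intervals that are systematically too tight: the forecaster's subjective 80% range captures the truth far less than 80% of the time.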
 
=== IV – Choices ===