Range: Difference between revisions
🛰 '''8 – The Outsider Advantage.''' In 2001, Eli Lilly’s Alph Bingham gathered twenty‑one stubborn chemistry problems and, over internal objections, posted them to an open website; when answers began arriving during the U.S. anthrax scare, he found himself cheerfully feeding mailed white powders into a spectrometer. A lawyer who had worked on chemical patents solved a synthesis by “thinking of tear gas,” and the experiment was spun out as InnoCentive; about a third of posted challenges were fully solved, especially when framed to attract non‑obvious solvers. The mechanism wasn’t new: in 1795, Parisian confectioner Nicolas Appert—vintner, brewer, chef—boiled sealed bottles and birthed canning decades before Pasteur named microbes, beating the scientists of his day with eclectic craft knowledge. NASA later used InnoCentive to improve forecasts of solar particle storms after thirty years of specialist struggle, confirming that problem statements that invite analogy beat narrow “local search.” Inside firms, polymathic inventors like 3M’s Andy Ouderkirk win by merging classes of patents and even writing algorithms to show how breadth predicts breakthrough; across industries, Don Swanson’s “undiscovered public knowledge” is found by people who connect shelved results to live problems. Outsiders and boundary crossers succeed because they re‑frame rather than optimize, importing concepts that specialists overlook under time‑saving routines. The broader the reference class you consult, the more likely you are to find a structure‑level rhyme that unlocks the task at hand. ''Bingham calls it “outside‑in” thinking: finding solutions in experiences far outside of focused training for the problem itself.''
🕹 '''9 – Lateral Thinking with Withered Technology.''' In Kyoto, the hanafuda card maker Nintendo staggered through the 1960s, dabbling in instant rice, taxis, and rent‑by‑the‑hour hotels until a factory maintenance worker, Gunpei Yokoi, turned a shop‑floor gadget into the Ultra Hand toy and paid down debt with 1.2 million sales. A complex electric “Drive Game” then flopped, teaching Yokoi to avoid fragile cutting‑edge parts and to pursue what he called “lateral thinking with withered technology”—cheap, well‑understood components used in novel ways. He wired a store‑bought galvanometer into the Love Tester; he stripped radio‑control to a single channel for the Lefty RX car that only turned left; and he shrank play into a pocket with 1980’s Game & Watch, which sold 43.4 million units and birthed the D‑pad later used on the NES. Watching a salaryman fiddle with a calculator on the Shinkansen, he imagined a discreet handheld, then embossed LCD screens with hundreds of tiny dots to fix “Newton’s rings” and shipped a device adults could play with their thumbs. In 1989 the Game Boy arrived with a 1970s‑era processor, four grayscale shades, a greenish screen, and days‑long battery life, and still crushed color rivals; by century’s end it had sold 118.7 million units—“the Sony Walkman of video gaming.” Even inside Nintendo, Yokoi had to argue that fun and portability would beat specs, and he was right. The chapter treats him as a producer‑generalist who recruited specialists but framed problems broadly, turning constraints into playgrounds. The larger point is that recombining familiar parts invites analogies and transfer, so range becomes a strategy for invention when everyone else chases the arms race. ''“If you draw two circles on a blackboard, and say, ‘That’s a snowman,’ everyone who sees it will sense the white color of the snow.”''
🎓 '''10 – Fooled by Expertise.''' The story opens with a 1980 wager over the fate of humanity: Stanford biologist Paul Ehrlich, confident that scarcity would drive resource prices up, bet against economist Julian Simon, who said prices would fall; a decade later, Simon won. The cautionary tale flows into Philip Tetlock’s decades‑long forecasting studies, where subject‑matter stars—“hedgehogs” who know one big thing—underperform eclectic “foxes” who borrow ideas, quantify uncertainty, and update beliefs. In tournament settings, brief training in foxy habits—reference‑class forecasting, explicit probability ranges, and constant post‑mortems—improves accuracy, while teams that prize “active open‑mindedness” outperform credentialed lone wolves. Psychologist Dan Kahan’s work shows why: more scientific knowledge can harden polarization unless curiosity pushes people to seek disconfirming evidence. Gerd Gigerenzer’s ten‑year analysis of twenty‑two top banks found their euro–dollar year‑end forecasts missed every directional turn and, in most years, the actual rate fell outside all expert ranges. Darwin’s notebooks model the opposite stance: he hunted facts that contradicted his theories and rewrote them. The take‑home is that experience in wicked environments misleads if it narrows attention to pet models; accuracy rewards breadth, humility, and rapid belief revision. The mechanism is disciplined updating and analogy: scanning wide reference classes and treating hunches as hypotheses makes judgment robust when feedback is noisy and delayed. ''“Good judges are good belief updaters.”''
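The “disciplined updating” mechanism attributed to foxes can be illustrated with a minimal Bayesian sketch. This is not from the book: the scenario, numbers, and function names are hypothetical, and the Brier score shown is the standard scoring rule that forecasting tournaments such as Tetlock’s have used to grade probability forecasts.

```python
# Illustrative sketch of "good belief updating": Bayes' rule applied to a
# forecaster's probability as evidence arrives. All numbers are made up.

def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return P(H | E) given a prior P(H) and the likelihood of the
    evidence under the hypothesis and under its negation."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast and the 0/1 outcome;
    lower is better. A stubborn, never-updated forecast is penalized."""
    return (forecast - outcome) ** 2

# A "fox" starts at 60% and revises on two pieces of evidence,
# including one that cuts against the hypothesis.
p = 0.6
p = bayes_update(p, p_e_given_h=0.8, p_e_given_not_h=0.4)  # supportive evidence
p = bayes_update(p, p_e_given_h=0.3, p_e_given_not_h=0.7)  # disconfirming evidence

print(p)  # the updated probability after both pieces of evidence
# If the event then fails to occur, compare the updater's score with
# a "hedgehog" who never moved off the initial 60%:
print(brier_score(p, 0), brier_score(0.6, 0))
```

The point of the toy example is only directional: a forecaster who folds in disconfirming evidence ends up closer to the truth, and proper scoring rules make that difference measurable.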
🧯 '''11 – Learning to Drop Your Familiar Tools.''' A Harvard Business School group chews over the Carter Racing case: race on national TV with a turbocharged car that’s failed seven times, or withdraw and lose money; students argue about payoffs while missing how temperature might interact with engine failures. The scenario echoes NASA’s 1986 Challenger launch call, where managers demanded quantification the data couldn’t provide, dismissed qualitative warnings as “away from goodness,” and reverted to a 53‑degree tradition because “we’d flown at 53 before.” Organizational scholar Karl Weick found the same rigidity in disasters where people literally would not drop tools: at Mann Gulch in 1949, thirteen smokejumpers died running uphill with chainsaws and packs; in 1994 on Storm King Mountain, fourteen more perished, some still holding gear within sight of safety. Investigations across fires, flight decks, and ships showed experts clinging to procedures and identities under stress, “regressing to what they know best” even when that fit the wrong situation. The chapter argues for sensemaking over stubborn decision pride: widen the frame, surface missing variables, and create cultures where deviating from the checklist is thinkable when conditions change. The broader lesson is that expertise must be portable; in wicked domains, survival depends on abandoning routines fast enough to rebuild a plan that matches the world in front of you. The mechanism is unlearning plus reframing: shedding overlearned responses frees attention to new cues and enables improvisation that generalists practice by design. ''“Dropping one’s tools is a proxy for unlearning, for adaptation, for flexibility.”''
🎨 '''12 – Deliberate Amateurs.'''