Less Wrong/Article summaries
The following is a list of summaries of all articles from Less Wrong, in chronological order.
The main purpose of this list is to keep track of which articles already have summaries written for them. We may eventually set up a way to automatically add these summaries to other wiki pages that link to the articles.
This list is a work in progress. Please feel free to continue filling it in, or to fix problems in the parts that are already complete.
This page is compiled from:
- Less Wrong/2006 Articles/Summaries
- Less Wrong/2007 Articles/Summaries
- Less Wrong/2008 Articles/Summaries
- Less Wrong/2009 Articles/Summaries
- 1 2006 Articles
- 2 2007 Articles
- 2.1 Some Claims Are Just Too Extraordinary
- 2.2 Outside the Laboratory
- 2.3 Politics is the Mind-Killer
- 2.4 Just Lose Hope Already
- 2.5 You Are Not Hiring the Top 1%
- 2.6 Policy Debates Should Not Appear One-Sided
- 2.7 Burch's Law
- 2.8 The Scales of Justice, the Notebook of Rationality
- 2.9 Blue or Green on Regulation?
- 2.10 Superstimuli and the Collapse of Western Civilization
- 2.11 Useless Medical Disclaimers
- 2.12 Archimedes's Chronophone
- 2.13 Chronophone Motivations
- 2.14 Self-deception: Hypocrisy or Akrasia?
- 2.15 Tsuyoku Naritai! (I Want To Become Stronger)
- 2.16 Tsuyoku vs. the Egalitarian Instinct
- 2.17 "Statistical Bias"
- 2.18 Useful Statistical Biases
- 2.19 The Error of Crowds
- 2.20 The Majority Is Always Wrong
- 2.21 Knowing About Biases Can Hurt People
- 2.22 Debiasing as Non-Self-Destruction
- 2.23 "Inductive Bias"
- 2.24 Suggested Posts
- 2.25 Futuristic Predictions as Consumable Goods
- 2.26 Marginally Zero-Sum Efforts
- 2.27 Priors as Mathematical Objects
- 2.28 Lotteries: A Waste of Hope
- 2.29 New Improved Lottery
- 2.30 Your Rationality is My Business
- 2.31 Consolidated Nature of Morality Thread
- 2.32 Feeling Rational
- 2.33 Universal Fire
- 2.34 Universal Law
- 2.35 Think Like Reality
- 2.36 Beware the Unsurprised
- 2.37 The Third Alternative
- 2.38 Third Alternatives for Afterlife-ism
- 2.39 Scope Insensitivity
- 2.40 One Life Against the World
- 2.41 Risk-Free Bonds Aren't
- 2.42 Correspondence Bias
- 2.43 Are Your Enemies Innately Evil?
- 2.44 Open Thread
- 2.45 Two More Things to Unlearn from School
- 2.46 Making Beliefs Pay Rent (in Anticipated Experiences)
- 2.47 Belief in Belief
- 2.48 Bayesian Judo
- 2.49 Professing and Cheering
- 2.50 Belief as Attire
- 2.51 Religion's Claim to be Non-Disprovable
- 2.52 The Importance of Saying "Oops"
- 2.53 Focus Your Uncertainty
- 2.54 The Proper Use of Doubt
- 2.55 The Virtue of Narrowness
- 2.56 You Can Face Reality
- 2.57 The Apocalypse Bet
- 2.58 Your Strength as a Rationalist
- 2.59 I Defy the Data!
- 2.60 Absence of Evidence Is Evidence of Absence
- 2.61 Conservation of Expected Evidence
- 2.62 Update Yourself Incrementally
- 2.63 One Argument Against An Army
- 2.64 Hindsight bias
- 2.65 Hindsight Devalues Science
- 2.66 Scientific Evidence, Legal Evidence, Rational Evidence
- 2.67 Is Molecular Nanotechnology "Scientific"?
- 2.68 Fake Explanations
- 2.69 Guessing the Teacher's Password
- 2.70 Science as Attire
- 2.71 Fake Causality
- 2.72 Semantic Stopsigns
- 2.73 Mysterious Answers to Mysterious Questions
- 2.74 The Futility of Emergence
- 2.75 Positive Bias: Look Into the Dark
- 2.76 Say Not "Complexity"
- 2.77 My Wild and Reckless Youth
- 2.78 Failing to Learn from History
- 2.79 Making History Available
- 2.80 Stranger Than History
- 2.81 Explain/Worship/Ignore?
- 2.82 "Science" as Curiosity-Stopper
- 2.83 Absurdity Heuristic, Absurdity Bias
- 2.84 Availability
- 2.85 Why is the Future So Absurd?
- 2.86 Anchoring and Adjustment
- 2.87 The Crackpot Offer
- 2.88 Radical Honesty
- 2.89 We Don't Really Want Your Participation
- 2.90 Applause Lights
- 2.91 Rationality and the English Language
- 2.92 Human Evil and Muddled Thinking
- 2.93 Doublethink (Choosing to be Biased)
- 2.94 Why I'm Blooking
- 2.95 Planning Fallacy
- 2.96 Kahneman's Planning Anecdote
- 2.97 Conjunction Fallacy
- 2.98 Conjunction Controversy (Or, How They Nail It Down)
- 2.99 Burdensome Details
- 2.100 What is Evidence?
- 2.101 The Lens That Sees Its Flaws
- 2.102 How Much Evidence Does It Take?
- 2.103 Einstein's Arrogance
- 2.104 Occam's Razor
- 2.105 9/26 is Petrov Day
- 2.106 How to Convince Me That 2 + 2 = 3
- 2.107 The Bottom Line
- 2.108 What Evidence Filtered Evidence?
- 2.109 Rationalization
- 2.110 Recommended Rationalist Reading
- 2.111 A Rational Argument
- 2.112 We Change Our Minds Less Often Than We Think
- 2.113 Avoiding Your Belief's Real Weak Points
- 2.114 The Meditation on Curiosity
- 2.115 Singlethink
- 2.116 No One Can Exempt You From Rationality's Laws
- 2.117 A Priori
- 2.118 Priming and Contamination
- 2.119 Do We Believe Everything We're Told?
- 2.120 Cached Thoughts
- 2.121 The "Outside the Box" Box
- 2.122 Original Seeing
- 2.123 How to Seem (and Be) Deep
- 2.124 The Logical Fallacy of Generalization from Fictional Evidence
- 2.125 Hold Off On Proposing Solutions
- 2.126 "Can't Say No" Spending
- 2.127 Congratulations to Paris Hilton
- 2.128 Pascal's Mugging: Tiny Probabilities of Vast Utilities
- 2.129 Illusion of Transparency: Why No One Understands You
- 2.130 Self-Anchoring
- 2.131 Expecting Short Inferential Distances
- 2.132 Explainers Shoot High. Aim Low!
- 2.133 Double Illusion of Transparency
- 2.134 No One Knows What Science Doesn't Know
- 2.135 Why Are Individual IQ Differences OK?
- 2.136 Bay Area Bayesians Unite!
- 2.137 Motivated Stopping and Motivated Continuation
- 2.138 Torture vs. Dust Specks
- 2.139 A Case Study of Motivated Continuation
- 2.140 A Terrifying Halloween Costume
- 2.141 Fake Justification
- 2.142 An Alien God
- 2.143 The Wonder of Evolution
- 2.144 Evolutions Are Stupid (But Work Anyway)
- 2.145 Natural Selection's Speed Limit and Complexity Bound
- 2.146 Beware of Stephen J. Gould
- 2.147 The Tragedy of Group Selectionism
- 2.148 Fake Selfishness
- 2.149 Fake Morality
- 2.150 Fake Optimization Criteria
- 2.151 Adaptation-Executers, not Fitness-Maximizers
- 2.152 Evolutionary Psychology
- 2.153 Protein Reinforcement and DNA Consequentialism
- 2.154 Thou Art Godshatter
- 2.155 Terminal Values and Instrumental Values
- 2.156 Evolving to Extinction
- 2.157 No Evolutions for Corporations or Nanodevices
- 2.158 The Simple Math of Everything
- 2.159 Conjuring An Evolution To Serve You
- 2.160 Artificial Addition
- 2.161 Truly Part Of You
- 2.162 Not for the Sake of Happiness (Alone)
- 2.163 Leaky Generalizations
- 2.164 The Hidden Complexity of Wishes
- 2.165 Lost Purposes
- 2.166 Purpose and Pragmatism
- 2.167 The Affect Heuristic
- 2.168 Evaluability (And Cheap Holiday Shopping)
- 2.169 Unbounded Scales, Huge Jury Awards, & Futurism
- 2.170 The Halo Effect
- 2.171 Superhero Bias
- 2.172 Mere Messiahs
- 2.173 Affective Death Spirals
- 2.174 Resist the Happy Death Spiral
- 2.175 Uncritical Supercriticality
- 2.176 Fake Fake Utility Functions
- 2.177 Fake Utility Functions
- 2.178 Evaporative Cooling of Group Beliefs
- 2.179 When None Dare Urge Restraint
- 2.180 The Robbers Cave Experiment
- 2.181 Misc Meta
- 2.182 Every Cause Wants To Be A Cult
- 2.183 Reversed Stupidity Is Not Intelligence
- 2.184 Argument Screens Off Authority
- 2.185 Hug the Query
- 2.186 Guardians of the Truth
- 2.187 Guardians of the Gene Pool
- 2.188 Guardians of Ayn Rand
- 2.189 The Litany Against Gurus
- 2.190 Politics and Awful Art
- 2.191 Two Cult Koans
- 2.192 False Laughter
- 2.193 Effortless Technique
- 2.194 Zen and the Art of Rationality
- 2.195 The Amazing Virgin Pregnancy
- 2.196 Asch's Conformity Experiment
- 2.197 On Expressing Your Concerns
- 2.198 Lonely Dissent
- 2.199 To Lead, You Must Stand Up
- 2.200 Cultish Countercultishness
- 2.201 My Strange Beliefs
- 2.202 End of 2007 articles
- 3 2008 Articles
- 3.1 Posting on Politics
- 3.2 The Two-Party Swindle
- 3.3 The American System and Misleading Labels
- 3.4 Stop Voting For Nincompoops
- 3.5 Rational vs. Scientific Ev-Psych
- 3.6 A Failed Just-So Story
- 3.7 But There's Still A Chance, Right?
- 3.8 The Fallacy of Gray
- 3.9 Absolute Authority
- 3.10 Infinite Certainty
- 3.11 0 And 1 Are Not Probabilities
- 3.12 Beautiful Math
- 3.13 Expecting Beauty
- 3.14 Is Reality Ugly?
- 3.15 Beautiful Probability
- 3.16 Trust in Math
- 3.17 Rationality Quotes 1
- 3.18 Rationality Quotes 2
- 3.19 Rationality Quotes 3
- 3.20 The Allais Paradox
- 3.21 Zut Allais!
- 3.22 Rationality Quotes 4
- 3.23 Allais Malaise
- 3.24 Against Discount Rates
- 3.25 Circular Altruism
- 3.26 Rationality Quotes 5
- 3.27 Rationality Quotes 6
- 3.28 Rationality Quotes 7
- 3.29 Rationality Quotes 8
- 3.30 Rationality Quotes 9
- 3.31 The "Intuitions" Behind "Utilitarianism"
- 3.32 Trust in Bayes
- 3.33 Something to Protect
- 3.34 Newcomb's Problem and Regret of Rationality
- 3.35 OB Meetup: Millbrae, Thu 21 Feb, 7pm
- 3.36 The Parable of the Dagger
- 3.37 The Parable of Hemlock
- 3.38 Words as Hidden Inferences
- 3.39 Extensions and Intensions
- 3.40 Buy Now Or Forever Hold Your Peace
- 3.41 Similarity Clusters
- 3.42 Typicality and Asymmetrical Similarity
- 3.43 The Cluster Structure of Thingspace
- 3.44 Disguised Queries
- 3.45 Neural Categories
- 3.46 How An Algorithm Feels From Inside
- 3.47 Disputing Definitions
- 3.48 Feel the Meaning
- 3.49 The Argument from Common Usage
- 3.50 Empty Labels
- 3.51 Classic Sichuan in Millbrae, Thu Feb 21, 7pm
- 3.52 Taboo Your Words
- 3.53 Replace the Symbol with the Substance
- 3.54 Fallacies of Compression
- 3.55 Categorizing Has Consequences
- 3.56 Sneaking in Connotations
- 3.57 Arguing "By Definition"
- 3.58 Where to Draw the Boundary?
- 3.59 Entropy, and Short Codes
- 3.60 Mutual Information, and Density in Thingspace
- 3.61 Superexponential Conceptspace, and Simple Words
- 3.62 Leave a Line of Retreat
- 3.63 The Second Law of Thermodynamics, and Engines of Cognition
- 3.64 Perpetual Motion Beliefs
- 3.65 Searching for Bayes-Structure
- 3.66 Conditional Independence, and Naive Bayes
- 3.67 Words as Mental Paintbrush Handles
- 3.68 Rationality Quotes 10
- 3.69 Rationality Quotes 11
- 3.70 Variable Question Fallacies
- 3.71 37 Ways That Words Can Be Wrong
- 3.72 Gary Gygax Annihilated at 69
- 3.73 Dissolving the Question
- 3.74 Wrong Questions
- 3.75 Righting a Wrong Question
- 3.76 Mind Projection Fallacy
- 3.77 Probability is in the Mind
- 3.78 The Quotation is not the Referent
- 3.79 Penguicon & Blook
- 3.80 Qualitatively Confused
- 3.81 Reductionism
- 3.82 Explaining vs. Explaining Away
- 3.83 Fake Reductionism
- 3.84 Savanna Poets
- 3.85 Joy in the Merely Real
- 3.86 Joy in Discovery
- 3.87 Bind Yourself to Reality
- 3.88 If You Demand Magic, Magic Won't Help
- 3.89 New York OB Meetup (ad-hoc) on Monday, Mar 24, @6pm
- 3.90 The Beauty of Settled Science
- 3.91 Amazing Breakthrough Day: April 1st
- 3.92 Is Humanism A Religion-Substitute?
- 3.93 Scarcity
- 3.94 To Spread Science, Keep It Secret
- 3.95 Initiation Ceremony
- 3.96 Hand vs. Fingers
- 3.97 Angry Atoms
- 3.98 Heat vs. Motion
- 3.99 Brain Breakthrough! It's Made of Neurons!
- 3.100 Reductive Reference
- 3.101 Zombies! Zombies?
- 3.102 Zombie Responses
- 3.103 The Generalized Anti-Zombie Principle
- 3.104 GAZP vs. GLUT
- 3.105 Belief in the Implied Invisible
- 3.106 Quantum Explanations
- 3.107 Configurations and Amplitude
- 3.108 Joint Configurations
- 3.109 Distinct Configurations
- 3.110 Where Philosophy Meets Science
- 3.111 Can You Prove Two Particles Are Identical?
- 3.112 Classical Configuration Spaces
- 3.113 The Quantum Arena
- 3.114 Feynman Paths
- 3.115 No Individual Particles
- 3.116 Identity Isn't In Specific Atoms
- 3.117 Zombies: The Movie
- 3.118 Three Dialogues on Identity
- 3.119 Decoherence
- 3.120 The So-Called Heisenberg Uncertainty Principle
- 3.121 Which Basis Is More Fundamental?
- 3.122 Where Physics Meets Experience
- 3.123 Where Experience Confuses Physicists
- 3.124 On Being Decoherent
- 3.125 The Conscious Sorites Paradox
- 3.126 Decoherence is Pointless
- 3.127 Decoherent Essences
- 3.128 The Born Probabilities
- 3.129 Decoherence as Projection
- 3.130 Entangled Photons
- 3.131 Bell's Theorem: No EPR "Reality"
- 3.132 Spooky Action at a Distance: The No-Communication Theorem
- 3.133 Decoherence is Simple
- 3.134 Decoherence is Falsifiable and Testable
- 3.135 Quantum Non-Realism
- 3.136 Collapse Postulates
- 3.137 If Many-Worlds Had Come First
- 3.138 Many Worlds, One Best Guess
- 3.139 The Failures of Eld Science
- 3.140 The Dilemma: Science or Bayes?
- 3.141 Science Doesn't Trust Your Rationality
- 3.142 When Science Can't Help
- 3.143 Science Isn't Strict Enough
- 3.144 Do Scientists Already Know This Stuff?
- 3.145 No Safe Defense, Not Even Science
- 3.146 Changing the Definition of Science
- 3.147 Conference on Global Catastrophic Risks
- 3.148 Faster Than Science
- 3.149 Einstein's Speed
- 3.150 That Alien Message
- 3.151 My Childhood Role Model
- 3.152 Mach's Principle: Anti-Epiphenomenal Physics
- 3.153 A Broken Koan
- 3.154 Relative Configuration Space
- 3.155 Timeless Physics
- 3.156 Timeless Beauty
- 3.157 Timeless Causality
- 3.158 Einstein's Superpowers
- 3.159 Class Project
- 3.160 A Premature Word on AI
- 3.161 The Rhythm of Disagreement
- 3.162 Principles of Disagreement
- 3.163 Timeless Identity
- 3.164 Why Quantum?
- 3.165 Living in Many Worlds
- 3.166 Thou Art Physics
- 3.167 Timeless Control
- 3.168 Bloggingheads: Yudkowsky and Horgan
- 3.169 Against Devil's Advocacy
- 3.170 Eliezer's Post Dependencies; Book Notification; Graphic Designer Wanted
- 3.171 The Quantum Physics Sequence
- 3.172 An Intuitive Explanation of Quantum Mechanics
- 3.173 Quantum Physics Revealed As Non-Mysterious
- 3.174 And the Winner is... Many-Worlds!
- 3.175 Quantum Mechanics and Personal Identity
- 3.176 Causality and Moral Responsibility
- 3.177 Possibility and Could-ness
- 3.178 The Ultimate Source
- 3.179 Passing the Recursive Buck
- 3.180 Grasping Slippery Things
- 3.181 Ghosts in the Machine
- 3.182 LA-602 vs. RHIC Review
- 3.183 Heading Toward Morality
- 3.184 The Outside View's Domain
- 3.185 Surface Analogies and Deep Causes
- 3.186 Optimization and the Singularity
- 3.187 The Psychological Unity of Humankind
- 3.188 The Design Space of Minds-In-General
- 3.189 No Universally Compelling Arguments
- 3.190 2-Place and 1-Place Words
- 3.191 The Opposite Sex
- 3.192 What Would You Do Without Morality?
- 3.193 The Moral Void
- 3.194 Created Already In Motion
- 3.195 I'd take it
- 3.196 The Bedrock of Fairness
- 3.197 2 of 10, not 3 total
- 3.198 Moral Complexities
- 3.199 Is Morality Preference?
- 3.200 Is Morality Given?
- 3.201 Will As Thou Wilt
- 3.202 Where Recursive Justification Hits Bottom
- 3.203 The Fear of Common Knowledge
- 3.204 My Kind of Reflection
- 3.205 The Genetic Fallacy
- 3.206 Fundamental Doubts
- 3.207 Rebelling Within Nature
- 3.208 Probability is Subjectively Objective
- 3.209 Lawrence Watt-Evans's Fiction
- 3.210 Posting May Slow
- 3.211 Whither Moral Progress?
- 3.212 The Gift We Give To Tomorrow
- 3.213 Could Anything Be Right?
- 3.214 Existential Angst Factory
- 3.215 Touching the Old
- 3.216 Should We Ban Physics?
- 3.217 Fake Norms, or "Truth" vs. Truth
- 3.218 When (Not) To Use Probabilities
- 3.219 Can Counterfactuals Be True?
- 3.220 Math is Subjunctively Objective
- 3.221 Does Your Morality Care What You Think?
- 3.222 Changing Your Metaethics
- 3.223 Setting Up Metaethics
- 3.224 The Meaning of Right
- 3.225 Interpersonal Morality
- 3.226 Humans in Funny Suits
- 3.227 Detached Lever Fallacy
- 3.228 A Genius for Destruction
- 3.229 The Comedy of Behaviorism
- 3.230 No Logical Positivist I
- 3.231 Anthropomorphic Optimism
- 3.232 Contaminated by Optimism
- 3.233 Hiroshima Day
- 3.234 Morality as Fixed Computation
- 3.235 Inseparably Right; or, Joy in the Merely Good
- 3.236 Sorting Pebbles Into Correct Heaps
- 3.237 Moral Error and Moral Disagreement
- 3.238 Abstracted Idealized Dynamics
- 3.239 "Arbitrary"
- 3.240 Is Fairness Arbitrary?
- 3.241 The Bedrock of Morality: Arbitrary?
- 3.242 Hot Air Doesn't Disagree
- 3.243 When Anthropomorphism Became Stupid
- 3.244 The Cartoon Guide to Löb's Theorem
- 3.245 Dumb Deplaning
- 3.246 You Provably Can't Trust Yourself
- 3.247 No License To Be Human
- 3.248 Invisible Frameworks
- 3.249 Mirrors and Paintings
- 3.250 Unnatural Categories
- 3.251 Magical Categories
- 3.252 Three Fallacies of Teleology
- 3.253 Dreams of AI Design
- 3.254 Against Modal Logics
- 3.255 Harder Choices Matter Less
- 3.256 Qualitative Strategies of Friendliness
- 3.257 Dreams of Friendliness
- 3.258 Brief Break
- 3.259 Rationality Quotes 12
- 3.260 Rationality Quotes 13
- 3.261 The True Prisoner's Dilemma
- 3.262 The Truly Iterated Prisoner's Dilemma
- 3.263 Rationality Quotes 14
- 3.264 Rationality Quotes 15
- 3.265 Rationality Quotes 16
- 3.266 Singularity Summit 2008
- 3.267 Points of Departure
- 3.268 Rationality Quotes 17
- 3.269 Excluding the Supernatural
- 3.270 Psychic Powers
- 3.271 Optimization
- 3.272 My Childhood Death Spiral
- 3.273 My Best and Worst Mistake
- 3.274 Raised in Technophilia
- 3.275 A Prodigy of Refutation
- 3.276 The Sheer Folly of Callow Youth
- 3.277 Say It Loud
- 3.278 Ban the Bear
- 3.279 How Many LHC Failures Is Too Many?
- 3.280 Horrible LHC Inconsistency
- 3.281 That Tiny Note of Discord
- 3.282 Fighting a Rearguard Action Against the Truth
- 3.283 My Naturalistic Awakening
- 3.284 The Level Above Mine
- 3.285 Competent Elites
- 3.286 Above-Average AI Scientists
- 3.287 Friedman's "Prediction vs. Explanation"
- 3.288 The Magnitude of His Own Folly
- 3.289 Awww, a Zebra
- 3.290 Intrade and the Dow Drop
- 3.291 Trying to Try
- 3.292 Use the Try Harder, Luke
- 3.293 Rationality Quotes 18
- 3.294 Beyond the Reach of God
- 3.295 My Bayesian Enlightenment
- 3.296 Bay Area Meetup for Singularity Summit
- 3.297 On Doing the Impossible
- 3.298 Make an Extraordinary Effort
- 3.299 Shut up and do the impossible!
- 3.300 AIs and Gatekeepers Unite!
- 3.301 Crisis of Faith
- 3.302 The Ritual
- 3.303 Rationality Quotes 19
- 3.304 Why Does Power Corrupt?
- 3.305 Ends Don't Justify Means (Among Humans)
- 3.306 Entangled Truths, Contagious Lies
- 3.307 Traditional Capitalist Values
- 3.308 Dark Side Epistemology
- 3.309 Protected From Myself
- 3.310 Ethical Inhibitions
- 3.311 Ethical Injunctions
- 3.312 Prices or Bindings?
- 3.313 Ethics Notes
- 3.314 Which Parts Are "Me"?
- 3.315 Inner Goodness
- 3.316 San Jose Meetup, Sat 10/25 @ 7:30pm
- 3.317 Expected Creative Surprises
- 3.318 Belief in Intelligence
- 3.319 Aiming at the Target
- 3.320 Measuring Optimization Power
- 3.321 Efficient Cross-Domain Optimization
- 3.322 Economic Definition of Intelligence?
- 3.323 Intelligence in Economics
- 3.324 Mundane Magic
- 3.325 BHTV: Jaron Lanier and Yudkowsky
- 3.326 Building Something Smarter
- 3.327 Complexity and Intelligence
- 3.328 Today's Inspirational Tale
- 3.329 Hanging Out My Speaker's Shingle
- 3.330 Back Up and Ask Whether, Not Why
- 3.331 Recognizing Intelligence
- 3.332 Lawful Creativity
- 3.333 Ask OB: Leaving the Fold
- 3.334 Lawful Uncertainty
- 3.335 Worse Than Random
- 3.336 The Weighted Majority Algorithm
- 3.337 Bay Area Meetup: 11/17 8PM Menlo Park
- 3.338 Selling Nonapples
- 3.339 The Nature of Logic
- 3.340 Boston-area Meetup: 11/18/08 9pm MIT/Cambridge
- 3.341 Logical or Connectionist AI?
- 3.342 Whither OB?
- 3.343 Failure By Analogy
- 3.344 Failure By Affective Analogy
- 3.345 The Weak Inside View
- 3.346 The First World Takeover
- 3.347 Whence Your Abstractions?
- 3.348 Observing Optimization
- 3.349 Life's Story Continues
- 3.350 Surprised by Brains
- 3.351 Cascades, Cycles, Insight...
- 3.352 ...Recursion, Magic
- 3.353 The Complete Idiot's Guide to Ad Hominem
- 3.354 Engelbart: Insufficiently Recursive
- 3.355 Total Nano Domination
- 3.356 Thanksgiving Prayer
- 3.357 Chaotic Inversion
- 3.358 Singletons Rule OK
- 3.359 Disappointment in the Future
- 3.360 Recursive Self-Improvement
- 3.361 Hard Takeoff
- 3.362 Permitted Possibilities, & Locality
- 3.363 Underconstrained Abstractions
- 3.364 Sustained Strong Recursion
- 3.365 Is That Your True Rejection?
- 3.366 Artificial Mysterious Intelligence
- 3.367 True Sources of Disagreement
- 3.368 Disjunctions, Antipredictions, Etc.
- 3.369 Bay Area Meetup Wed 12/10 @8pm
- 3.370 The Mechanics of Disagreement
- 3.371 What I Think, If Not Why
- 3.372 You Only Live Twice
- 3.373 BHTV: de Grey and Yudkowsky
- 3.374 For The People Who Are Still Alive
- 3.375 Not Taking Over the World
- 3.376 Visualizing Eutopia
- 3.377 Prolegomena to a Theory of Fun
- 3.378 High Challenge
- 3.379 Complex Novelty
- 3.380 Sensual Experience
- 3.381 Living By Your Own Strength
- 3.382 Rationality Quotes 20
- 3.383 Imaginary Positions
- 3.384 Harmful Options
- 3.385 Devil's Offers
- 3.386 Nonperson Predicates
- 3.387 Nonsentient Optimizers
- 3.388 Nonsentient Bloggers
- 3.389 Can't Unbirth a Child
- 3.390 Amputation of Destiny
- 3.391 Dunbar's Function
- 3.392 A New Day
- 3.393 End of 2008 articles
- 4 2009 Articles
- 4.1 Free to Optimize
- 4.2 The Uses of Fun (Theory)
- 4.3 Growing Up is Hard
- 4.4 Changing Emotions
- 4.5 Rationality Quotes 21
- 4.6 Emotional Involvement
- 4.7 Rationality Quotes 22
- 4.8 Serious Stories
- 4.9 Rationality Quotes 23
- 4.10 Continuous Improvement
- 4.11 Eutopia is Scary
- 4.12 Building Weirdtopia
- 4.13 She has joined the Conspiracy
- 4.14 Justified Expectation of Pleasant Surprises
- 4.15 Seduced by Imagination
- 4.16 Getting Nearer
- 4.17 In Praise of Boredom
- 4.18 Sympathetic Minds
- 4.19 Interpersonal Entanglement
- 4.20 Failed Utopia #4-2
- 4.21 Investing for the Long Slump
- 4.22 Higher Purpose
- 4.23 Rationality Quotes 24
- 4.24 The Fun Theory Sequence
- 4.25 BHTV: Yudkowsky / Wilkinson
- 4.26 31 Laws of Fun
- 4.27 OB Status Update
- 4.28 Rationality Quotes 25
- 4.29 Value is Fragile
- 4.30 Three Worlds Collide (0/8)
- 4.31 The Baby-Eating Aliens (1/8)
- 4.32 War and/or Peace (2/8)
- 4.33 The Super Happy People (3/8)
- 4.34 Interlude with the Confessor (4/8)
- 4.35 Three Worlds Decide (5/8)
- 4.36 Normal Ending: Last Tears (6/8)
- 4.37 True Ending: Sacrificial Fire (7/8)
- 4.38 Epilogue: Atonement (8/8)
- 4.39 The Thing That I Protect
- 4.40 ...And Say No More Of It
- 4.41 (Moral) Truth in Fiction?
- 4.42 Informers and Persuaders
- 4.43 Cynicism in Ev-Psych (and Econ?)
- 4.44 The Evolutionary-Cognitive Boundary
- 4.45 An Especially Elegant Evpsych Experiment
- 4.46 Rationality Quotes 26
- 4.47 An African Folktale
- 4.48 Cynical About Cynicism
- 4.49 Good Idealistic Books are Rare
- 4.50 Against Maturity
- 4.51 Pretending to be Wise
- 4.52 Wise Pretensions v.0
- 4.53 Rationality Quotes 27
- 4.54 Fairness vs. Goodness
- 4.55 On Not Having an Advance Abyssal Plan
- 4.56 About Less Wrong
- 4.57 Formative Youth
- 4.58 Tell Your Rationalist Origin Story
- 4.59 Markets are Anti-Inductive
- 4.60 Issues, Bugs, and Requested Features
- 4.61 The Most Important Thing You Learned
- 4.62 The Most Frequently Useful Thing
- 4.63 That You'd Tell All Your Friends
- 4.64 Test Your Rationality
- 4.65 Unteachable Excellence
- 4.66 The Costs of Rationality
- 4.67 Teaching the Unteachable
- 4.68 No, Really, I've Deceived Myself
- 4.69 The ethic of hand-washing and community epistemic practice
- 4.70 Belief in Self-Deception
- 4.71 Rationality and Positive Psychology
- 4.72 Posting now enabled
- 4.73 Kinnaird's truels
- 4.74 Recommended Rationalist Resources
- 4.75 Information cascades
- 4.76 Is it rational to take psilocybin?
- 4.77 Does blind review slow down science?
- 4.78 Formalization is a rationality technique
- 4.79 Slow down a little... maybe?
- 4.80 Checklists
- 4.81 The Golem
- 4.82 Simultaneously Right and Wrong
- 4.83 Moore's Paradox
- 4.84 It's the Same Five Dollars!
- 4.85 Lies and Secrets
- 4.86 The Mystery of the Haunted Rationalist
- 4.87 Don't Believe You'll Self-Deceive
- 4.88 The Wrath of Kahneman
- 4.89 The Mistake Script
- 4.90 LessWrong anti-kibitzer (hides comment authors and vote counts)
- 4.91 You May Already Be A Sinner
- 4.92 Striving to Accept
- 4.93 Software tools for community truth-seeking
- 4.94 Wanted: Python open source volunteers
- 4.95 Selective processes bring tag-alongs (but not always!)
- 4.96 Adversarial System Hats
- 4.97 Beginning at the Beginning
- 4.98 The Apologist and the Revolutionary
- 4.99 Raising the Sanity Waterline
- 4.100 So you say you're an altruist...
- 4.101 A Sense That More Is Possible
- 4.102 Talking Snakes: A Cautionary Tale
- 4.103 Boxxy and Reagan
- 4.104 Dialectical Bootstrapping
- 4.105 Is Santa Real?
- 4.106 Epistemic Viciousness
- 4.107 On the Care and Feeding of Young Rationalists
- 4.108 The Least Convenient Possible World
- 4.109 Closet survey #1
- 4.110 Soulless morality
- 4.111 The Skeptic's Trilemma
- 4.112 Schools Proliferating Without Evidence
- 4.113 Really Extreme Altruism
- 4.114 Storm by Tim Minchin
- 4.115 3 Levels of Rationality Verification
- 4.116 The Tragedy of the Anticommons
- 4.117 Are You a Solar Deity?
- 4.118 In What Ways Have You Become Stronger?
- 4.119 Taboo "rationality," please.
- 4.120 Science vs. art
- 4.121 What Do We Mean By "Rationality"?
- 4.122 Comments for "Rationality"
- 4.123 The "Spot the Fakes" Test
- 4.124 On Juvenile Fiction
- 4.125 Rational Me or We?
- 4.126 Dead Aid
- 4.127 Tarski Statements as Rationalist Exercise
- 4.128 The Pascal's Wager Fallacy Fallacy
- 4.129 Never Leave Your Room
- 4.130 Rationalist Storybooks: A Challenge
- 4.131 A corpus of our community's knowledge
- 4.132 Little Johny Bayesian
- 4.133 How to Not Lose an Argument
- 4.134 Counterfactual Mugging
- 4.135 Rationalist Fiction
- 4.136 Rationalist Poetry Fans, Unite!
- 4.137 Precommitting to paying Omega.
- 4.138 Why Our Kind Can't Cooperate
- 4.139 Just a reminder: Scientists are, technically, people.
- 4.140 Support That Sounds Like Dissent
- 4.141 Tolerate Tolerance
- 4.142 Mind Control and Me
- 4.143 Individual Rationality Is a Matter of Life and Death
- 4.144 The Power of Positivist Thinking
- 4.145 Don't Revere The Bearer Of Good Info
- 4.146 You're Calling *Who* A Cult Leader?
- 4.147 Cached Selves
- 4.148 Eliezer Yudkowsky Facts
- 4.149 When Truth Isn't Enough
- 4.150 BHTV: Yudkowsky & Adam Frank on "religious experience"
- 4.151 I'm confused. Could someone help?
- 4.152 Playing Video Games In Shuffle Mode
- 4.153 Book: Psychiatry and the Human Condition
- 4.154 Thoughts on status signals
- 4.155 Bogus Pipeline, Bona Fide Pipeline
- 4.156 On Things that are Awesome
- 4.157 Hyakujo's Fox
- 4.158 Terrorism is not about Terror
- 4.159 The Implicit Association Test
- 4.160 Contests vs. Real World Problems
- 4.161 The Sacred Mundane
- 4.162 Extreme updating: The devil is in the missing details
- 4.163 Spock's Dirty Little Secret
- 4.164 The Good Bayesian
- 4.165 Fight Biases, or Route Around Them?
- 4.166 Why *I* fail to act rationally
- 4.167 Open Thread: March 2009
- 4.168 Two Blegs
- 4.169 Your Price for Joining
- 4.170 Sleeping Beauty gets counterfactually mugged
- 4.171 The Mind Is Not Designed For Thinking
- 4.172 Crowley on Religious Experience
- 4.173 Can Humanism Match Religion's Output?
- 4.174 On Seeking a Shortening of the Way
- 4.175 Altruist Coordination -- Central Station
- 4.176 Less Wrong Facebook Page
- 4.177 The Hidden Origins of Ideas
- 4.178 Defense Against The Dark Arts: Case Study #1
- 4.179 Church vs. Taskforce
- 4.180 When It's Not Right to be Rational
- 4.181 The Zombie Preacher of Somerset
- 4.182 Hygienic Anecdotes
- 4.183 Rationality: Common Interest of Many Causes
- 4.184 Ask LW: What questions to test in our rationality questionnaire?
- 4.185 Bay area OB/LW meetup, today, Sunday, March 29, at 5pm
- 4.186 Akrasia, hyperbolic discounting, and picoeconomics
- 4.187 Deliberate and spontaneous creativity
- 4.188 Most Rationalists Are Elsewhere
- 4.189 Framing Effects in Anthropology
- 4.190 Kling, Probability, and Economics
- 4.191 Helpless Individuals
- 4.192 The Benefits of Rationality?
- 4.193 Money: The Unit of Caring
- 4.194 Building Communities vs. Being Rational
- 4.195 Degrees of Radical Honesty
- 4.196 Introducing CADIE
- 4.197 Purchase Fuzzies and Utilons Separately
- 4.198 Proverbs and Cached Judgments: the Rolling Stone
- 4.199 You don't need Kant
- 4.200 Accuracy Versus Winning
- 4.201 Wrong Tomorrow
- 4.202 Selecting Rationalist Groups
- 4.203 Aumann voting; or, How to vote when you're ignorant
- 4.204 "Robot scientists can think for themselves"
- 4.205 Where are we?
- 4.206 The Brooklyn Society For Ethical Culture
- 4.207 Open Thread: April 2009
- 4.208 Rationality is Systematized Winning
- 4.209 Another Call to End Aid to Africa
- 4.210 First London Rationalist Meeting upcoming
- 4.211 On dollars, utility, and crack cocaine
- 4.212 Incremental Progress and the Valley
- 4.213 The First London Rationalist Meetup
- 4.214 Why Support the Underdog?
- 4.215 Off-Topic Discussion Thread: April 2009
- 4.216 Voting etiquette
- 4.217 Formalizing Newcomb's
- 4.218 Supporting the underdog is explained by Hanson's Near/Far distinction
- 4.219 Real-Life Anthropic Weirdness
- 4.220 Rationalist Wiki
- 4.221 Rationality Toughness Tests
- 4.222 Heuristic is not a bad word
- 4.223 Rationalists should beware rationalism
- 4.224 Newcomb's Problem standard positions
- 4.225 Average utilitarianism must be correct?
- 4.226 Rationalist wiki, redux
- 4.227 What do fellow rationalists think about Mensa?
- 4.228 Extenuating Circumstances
- 4.229 On Comments, Voting, and Karma - Part I
- 4.230 Newcomb's Problem vs. One-Shot Prisoner's Dilemma
- 4.231 What isn't the wiki for?
- 4.232 Eternal Sunshine of the Rational Mind
- 4.233 Of Lies and Black Swan Blowups
- 4.234 Whining-Based Communities
- 4.235 Help, help, I'm being oppressed!
- 4.236 Zero-based karma coming through
- 4.237 E-Prime
- 4.238 Mandatory Secret Identities
- 4.239 Rationality, Cryonics and Pascal's Wager
- 4.240 Less Wrong IRC Meetup
- 4.241 "Stuck In The Middle With Bruce"
- 4.242 Extreme Rationality: It's Not That Great
- 4.243 "Playing to Win"
- 4.244 Secret Identities vs. Groupthink
- 4.245 Silver Chairs, Paternalism, and Akrasia
- 4.246 Extreme Rationality: It Could Be Great
- 4.247 The uniquely awful example of theism
- 4.248 Beware of Other-Optimizing
- 4.249 How theism works
- 4.250 That Crisis thing seems pretty useful
- 4.251 Spay or Neuter Your Irrationalities
- 4.252 The Unfinished Mystery of the Shangri-La Diet
- 4.253 Akrasia and Shangri-La
- 4.254 Maybe Theism Is OK
- 4.255 Metauncertainty
- 4.256 Is masochism necessary?
- 4.257 Missed Distinctions
- 4.258 Toxic Truth
- 4.259 Too much feedback can be a bad thing
- 4.260 Twelve Virtues booklet printing?
- 4.261 How Much Thought
- 4.262 Awful Austrians
- 4.263 Sunk Cost Fallacy
- 4.264 It's okay to be (at least a little) irrational
- 4.265 Marketing rationalism
- 4.266 Bystander Apathy
- 4.267 Persuasiveness vs Soundness
- 4.268 Declare your signaling and hidden agendas
- 4.269 GroupThink, Theism ... and the Wiki
- 4.270 Collective Apathy and the Internet
- 4.271 Tell it to someone who doesn't care
- 4.272 Bayesians vs. Barbarians
- 4.273 Actions and Words: Akrasia and the Fruit of Self-Knowledge
- 4.274 Mechanics without wrenches
- 4.275 I Changed My Mind Today - Canned Laughter
- 4.276 Of Gender and Rationality
- 4.277 Welcome to Less Wrong!
- 4.278 Instrumental Rationality is a Chimera
- 4.279 Practical rationality questionnaire
- 4.280 My Way
- 4.281 The Art of Critical Decision Making
- 4.282 The Trouble With "Good"
- 4.283 While we're on the subject of meta-ethics...
- 4.284 Chomsky on reason and science
- 4.285 Anti-rationality quotes
- 4.286 Two-Tier Rationalism
- 4.287 My main problem with utilitarianism
- 4.288 Just for fun - let's play a game.
- 4.289 Rationality Quotes - April 2009
- 4.290 The Epistemic Prisoner's Dilemma
- 4.291 How a pathological procrastinator can lose weight (Anti-akrasia)
- 4.292 Atheist or Agnostic?
- 4.293 Great Books of Failure
- 4.294 Weekly Wiki Workshop and suggested articles
- 4.295 The True Epistemic Prisoner's Dilemma
- 4.296 Spreading the word?
- 4.297 The ideas you're not ready to post
- 4.298 Evangelical Rationality
- 4.299 The Sin of Underconfidence
- 4.300 Masochism vs. Self-defeat
- 4.301 Well-Kept Gardens Die By Pacifism
- 4.302 UC Santa Barbara Rationalists Unite - Saturday, 6PM
- 4.303 LessWrong Boo Vote (Stochastic Downvoting)
- 4.304 Proposal: Use the Wiki for Concepts
- 4.305 Escaping Your Past
- 4.306 Go Forth and Create the Art!
- 4.307 Fix it and tell us what you did
- 4.308 This Didn't Have To Happen
- 4.309 Just a bit of humor...
- 4.310 What's in a name? That which we call a rationalist...
- 4.311 Rational Groups Kick Ass
- 4.312 Instrumental vs. Epistemic -- A Bardic Perspective
- 4.313 Programmatic Prediction markets
- 4.314 Cached Procrastination
- 4.315 Practical Advice Backed By Deep Theories
- 4.316 "Self-pretending" is not as useful as we think
- 4.317 Where's Your Sense of Mystery?
- 4.318 Less Meta
- 4.319 SIAI call for skilled volunteers and potential interns
- 4.320 The Craft and the Community
- 4.321 Excuse me, would you like to take a survey?
- 4.322 Should we be biased?
- 4.323 Theism, Wednesday, and Not Being Adopted
- 4.324 The End (of Sequences)
- 4.325 Final Words
- 4.326 Bayesian Cabaret
- 4.327 Verbal Overshadowing and The Art of Rationality
- 4.328 How Not to be Stupid: Starting Up
- 4.329 How Not to be Stupid: Know What You Want, What You Really Really Want
- 4.330 Epistemic vs. Instrumental Rationality: Approximations
- 4.331 What is control theory, and why do you need to know about it?
- 4.332 Re-formalizing PD
- 4.333 Generalizing From One Example
- 4.334 Wednesday depends on us.
- 4.335 How to come up with verbal probabilities
- 4.336 Fighting Akrasia: Incentivising Action
- 4.337 Fire and Motion
- 4.338 Fiction of interest
- 4.339 How Not to be Stupid: Adorable Maybes
- 4.340 Rationalistic Losing
- 4.341 Rationalist Role in the Information Age
- 4.342 Conventions and Confusing Continuity Conundrums
- 4.343 Open Thread: May 2009
- 4.344 Second London Rationalist Meeting upcoming - Sunday 14:00
- 4.345 TED Talks for Less Wrong
- 4.346 The mind-killer
- 4.347 What I Tell You Three Times Is True
- 4.348 Return of the Survey
- 4.349 Essay-Question Poll: Dietary Choices
- 4.350 Allais Hack -- Transform Your Decisions!
- 4.351 Without models
- 4.352 Bead Jar Guesses
- 4.353 Special Status Needs Special Support
- 4.354 How David Beats Goliath
- 4.355 How to use "philosophical majoritarianism"
- 4.356 Off Topic Thread: May 2009
- 4.357 Introduction Thread: May 2009
- 4.358 Consider Representative Data Sets
- 4.359 No Universal Probability Space
- 4.360 Wiki.lesswrong.com Is Live
- 4.361 Hardened Problems Make Brittle Models
- 4.362 Beware Trivial Inconveniences
- 4.363 On the Fence? Major in CS
- 4.364 Rationality is winning - or is it?
- 4.365 The First Koan: Drinking the Hot Iron Ball
- 4.366 Epistemic vs. Instrumental Rationality: Case of the Leaky Agent
- 4.367 Replaying History
- 4.368 Framing Consciousness
- 4.369 A Request for Open Problems
- 4.370 How Not to be Stupid: Brewing a Nice Cup of Utilitea
- 4.371 Step Back
- 4.372 You Are A Brain
- 4.373 No One Knows Stuff
- 4.374 Willpower Hax #487: Execute by Default
- 4.375 Rationality in the Media: Don't (New Yorker, May 2009)
- 4.376 Survey Results
- 4.377 A Parable On Obsolete Ideologies
- 4.378 "Open-Mindedness" - the video
- 4.379 Religion, Mystery, and Warm, Soft Fuzzies
- 4.380 Cheerios: An "Untested New Drug"
- 4.381 Essay-Question Poll: Voting
- 4.382 Outward Change Drives Inward Change
- 4.383 Share Your Anti-Akrasia Tricks
- 4.384 Wanting to Want
- 4.385 "What Is Wrong With Our Thoughts"
- 4.386 Bad reasons for a rationalist to lose
- 4.387 Supernatural Math
- 4.388 Rationality quotes - May 2009
- 4.389 Positive Bias Test (C++ program)
- 4.390 Catchy Fallacy Name Fallacy (and Supporting Disagreement)
- 4.391 Inhibition and the Mind
- 4.392 Least Signaling Activities?
- 4.393 Brute-force Music Composition
- 4.394 Changing accepted public opinion and Skynet
- 4.395 Homogeneity vs. heterogeneity (or, What kind of sex is most moral?)
- 4.396 Saturation, Distillation, Improvisation: A Story About Procedural Knowledge And Cookies
- 4.397 This Failing Earth
- 4.398 The Wire versus Evolutionary Psychology
- 4.399 Dissenting Views
- 4.400 Eric Drexler on Learning About Everything
- 4.401 Anime Explains the Epimenides Paradox
- 4.402 Do Fandoms Need Awfulness?
- 4.403 Can we create a function that provably predicts the optimization power of intelligences?
- 4.404 Link: The Case for Working With Your Hands
- 4.405 Image vs. Impact: Can public commitment be counterproductive for achievement?
- 4.406 A social norm against unjustified opinions?
- 4.407 Taking Occam Seriously
- 4.408 The Onion Goes Inside The Biased Mind
- 4.409 The Frontal Syndrome
- 4.410 Open Thread: June 2009
- 4.411 Concrete vs Contextual values
- 4.412 Bioconservative and biomoderate singularitarian positions
- 4.413 Would You Slap Your Father? Article Linkage and Discussion
- 4.414 With whom shall I diavlog?
- 4.415 Mate selection for the men here
- 4.416 Third London Rationalist Meeting
- 4.417 Post Your Utility Function
- 4.418 Probability distributions and writing style
- 4.419 My concerns about the term 'rationalist'
- 4.420 Honesty: Beyond Internal Truth
- 4.421 Macroeconomics, The Lucas Critique, Microfoundations, and Modeling in General
- 4.422 indexical uncertainty and the Axiom of Independence
- 4.423 London Rationalist Meetups bikeshed painting thread
- 4.424 The Aumann's agreement theorem game (guess 2/3 of the average)
- 4.425 Expected futility for humans
- 4.426 You can't believe in Bayes
- 4.427 Less wrong economic policy
- 4.428 The Terrible, Horrible, No Good, Very Bad Truth About Morality and What To Do About It
- 4.429 Let's reimplement EURISKO!
- 4.430 If it looks like utility maximizer and quacks like utility maximizer...
- 4.431 Typical Mind and Politics
- 4.432 Why safety is not safe
- 4.433 Rationality Quotes - June 2009
- 4.434 Readiness Heuristics
- 4.435 The two meanings of mathematical terms
- 4.436 The Laws of Magic
- 4.437 Intelligence enhancement as existential risk mitigation
- 4.438 Rationalists lose when others choose
- 4.439 Ask LessWrong: Human cognitive enhancement now?
- 4.440 Don't Count Your Chickens...
- 4.441 Applied Picoeconomics
- 4.442 Representative democracy awesomeness hypothesis
- 4.443 The Physiology of Willpower
- 4.444 Time to See If We Can Apply Anything We Have Learned
- 4.445 Cascio in The Atlantic, more on cognitive enhancement as existential risk mitigation
- 4.446 ESR's comments on some EY:OB/LW posts
- 4.447 Nonparametric Ethics
- 4.448 Shane Legg on prospect theory and computational finance
- 4.449 The Domain of Your Utility Function
- 4.450 The Monty Maul Problem
- 4.451 Guilt by Association
- 4.452 Lie to me?
- 4.453 Richard Dawkins TV - Baloney Detection Kit video
- 4.454 Coming Out
- 4.455 The Great Brain is Located Externally
- 4.456 Controlling your inner control circuits
- 4.457 What's In A Name?
- 4.458 Atheism = Untheism + Antitheism
- 4.459 Book Review: Complications
- 4.460 Open Thread: July 2009
- 4.461 Fourth London Rationalist Meeting?
- 4.462 Rationality Quotes - July 2009
- 4.463 Harnessing Your Biases
- 4.464 Avoiding Failure: Fallacy Finding
- 4.465 Not Technically Lying
- 4.466 The enemy within
- 4.467 Media bias
- 4.468 Can chess be a game of luck?
- 4.469 The Dangers of Partial Knowledge of the Way: Failing in School
- 4.470 An interesting speed dating study
- 4.471 Can self-help be bad for you?
- 4.472 Causality does not imply correlation
- 4.473 Recommended reading for new rationalists
- 4.474 Formalized math: dream vs reality
- 4.475 Causation as Bias (sort of)
- 4.476 Debate: Is short term planning in humans due to a short life or due to bias?
- 4.477 Jul 12 Bay Area meetup - Hanson, Vassar, Yudkowsky
- 4.478 Our society lacks good self-preservation mechanisms
- 4.479 Good Quality Heuristics
- 4.480 How likely is a failure of nuclear deterrence?
- 4.481 The Strangest Thing An AI Could Tell You
- 4.482 "Sex Is Always Well Worth Its Two-Fold Cost"
- 4.483 The Dirt on Depression
- 4.484 Fair Division of Black-Hole Negentropy: an Introduction to Cooperative Game Theory
- 4.485 Absolute denial for atheists
- 4.486 Causes of disagreements
- 4.487 The Popularization Bias
- 4.488 Zwicky's Trifecta of Illusions
- 4.489 Are You Anosognosic?
- 4.490 Article upvoting
- 4.491 Sayeth the Girl
- 4.492 Timeless Decision Theory: Problems I Can't Solve
- 4.493 An Akrasia Anecdote
- 4.494 Being saner about gender and rationality
- 4.495 Are you crazy?
- 4.496 Counterfactual Mugging v. Subjective Probability
- 4.497 Creating The Simple Math of Everything
- 4.498 Joint Distributions and the Slow Spread of Good Ideas
- 4.499 Chomsky, Sports Talk Radio, Media Bias, and Me
- 4.500 Outside Analysis and Blind Spots
- 4.501 Shut Up And Guess
- 4.502 Of Exclusionary Speech and Gender Politics
- 4.503 Missing the Trees for the Forest
- 4.504 Deciding on our rationality focus
- 4.505 Fairness and Geometry
- 4.506 It's all in your head-land
- 4.507 An observation on cryocrastination
- 4.508 The Price of Integrity
- 4.509 Are calibration and rational decisions mutually exclusive? (Part one)
- 4.510 The Nature of Offense
- 4.511 AndrewH's observation and opportunity costs
- 4.512 Are calibration and rational decisions mutually exclusive? (Part two)
- 4.513 Celebrate Trivial Impetuses
- 4.514 Freaky Fairness
- 4.515 Link: Interview with Vladimir Vapnik
- 4.516 Five Stages of Idolatry
- 4.517 Bayesian Flame
- 4.518 The Second Best
- 4.519 Bayesian Utility: Representing Preference by Probability Measures
- 4.520 The Trolley Problem in popular culture: Torchwood Series 3
- 4.521 Thomas C. Schelling's "Strategy of Conflict"
- 4.522 Information cascades in scientific practice
- 4.523 The Obesity Myth
- 4.524 The Hero With A Thousand Chances
- 4.525 Pract: A Guessing and Testing Game
- 4.526 An Alternative Approach to AI Cooperation
- 4.527 Open Thread: August 2009
- 4.528 Pain
- 4.529 Suffering
- 4.530 Why You're Stuck in a Narrative
- 4.531 Unspeakable Morality
- 4.532 The Difficulties of Potential People and Decision Making
- 4.533 Wits and Wagers
- 4.534 The usefulness of correlations
- 4.535 She Blinded Me With Science
- 4.536 The Machine Learning Personality Test
- 4.537 A Normative Rule for Decision-Changing Metrics
- 4.538 Recommended reading: George Orwell on knowledge from authority
- 4.539 Rationality Quotes - August 2009
- 4.540 Why Real Men Wear Pink
- 4.541 The Objective Bayesian Programme
- 4.542 LW/OB Rationality Quotes - August 2009
- 4.543 Exterminating life is rational
- 4.544 Robin Hanson's lists of Overcoming Bias Posts
- 4.545 Fighting Akrasia: Finding the Source
- 4.546 A note on hypotheticals
- 4.547 Dreams with Damaged Priors
- 4.548 Would Your Real Preferences Please Stand Up?
- 4.549 Calibration fail
- 4.550 Guess Again
- 4.551 Misleading the witness
- 4.552 Utilons vs. Hedons
- 4.553 Deleting paradoxes with fuzzy logic
- 4.554 Sense, Denotation and Semantics
- 4.555 Towards a New Decision Theory
- 4.556 Fighting Akrasia: Survey Design Help Request
- 4.557 Minds that make optimal use of small amounts of sensory data
- 4.558 Bloggingheads: Yudkowsky and Aaronson talk about AI and Many-worlds
- 4.559 Oh my God! It's full of Nash equilibria!
- 4.560 Happiness is a Heuristic
- 4.561 Experiential Pica
- 4.562 Friendlier AI through politics
- 4.563 Singularity Summit 2009 (quick post)
- 4.564 Scott Aaronson's "On Self-Delusion and Bounded Rationality"
- 4.565 Ingredients of Timeless Decision Theory
- 4.566 You have just been Counterfactually Mugged!
- 4.567 Evolved Bayesians will be biased
- 4.568 How inevitable was modern human civilization - data
- 4.569 Timeless Decision Theory and Meta-Circular Decision Theory
- 4.570 ESR's New Take on Qualia
- 4.571 The Journal of (Failed) Replication Studies
- 4.572 Working Mantras
- 4.573 Decision theory: An outline of some upcoming posts
- 4.574 How does an infovore manage information overload?
- 4.575 Confusion about Newcomb is confusion about counterfactuals
- 4.576 Mathematical simplicity bias and exponential functions
- 4.577 A Rationalist's Bookshelf: The Mind's I (Douglas Hofstadter and Daniel Dennett, 1981)
- 4.578 Pittsburgh Meetup: Survey of Interest
- 4.579 Paper: Testing ecological models
- 4.580 The Twin Webs of Knowledge
- 4.581 Don't be Pathologically Mugged!
- 4.582 Some counterevidence for human sociobiology
- 4.583 Cookies vs Existential Risk
- 4.584 Argument Maps Improve Critical Thinking
- 4.585 Great post on Reddit about accepting atheism
- 4.586 Optimal Strategies for Reducing Existential Risk
- 4.587 Open Thread: September 2009
- 4.588 Rationality Quotes - September 2009
- 4.589 LW/OB Quotes - Fall 2009
- 4.590 Knowing What You Know
- 4.591 Decision theory: Why we need to reduce "could", "would", "should"
- 4.592 The Featherless Biped
- 4.593 The Sword of Good
- 4.594 Torture vs. Dust vs. the Presumptuous Philosopher: Anthropic Reasoning in UDT
- 4.595 Notes on utility function experiment
- 4.596 Counterfactual Mugging and Logical Uncertainty
- 4.597 Bay Area OBLW Meetup Sep 12
- 4.598 Decision theory: Why Pearl helps reduce "could" and "would", but still leaves us with at least three alternatives
- 4.599 Forcing Anthropics: Boltzmann Brains
- 4.600 Why I'm Staying On Bloggingheads.tv
- 4.601 An idea: Sticking Point Learning
- 4.602 FHI postdoc at Oxford
- 4.603 Outlawing Anthropics: An Updateless Dilemma
- 4.604 Let Them Debate College Students
- 4.605 Pittsburgh Meetup: Saturday 9/12, 6:30PM, CMU
- 4.606 The Lifespan Dilemma
- 4.607 Formalizing informal logic
- 4.608 Timeless Identity Crisis
- 4.609 The New Nostradamus
- 4.610 Formalizing reflective inconsistency
- 4.611 Beware of WEIRD psychological samples
- 4.612 The Absent-Minded Driver
- 4.613 What is the Singularity Summit?
- 4.614 Sociosexual Orientation Inventory, or failing to perform basic sanity checks
- 4.615 Quantum Russian Roulette
- 4.616 MWI, weird quantum experiments and future-directed continuity of conscious experience
- 4.617 Minneapolis Meetup: Survey of interest
- 4.618 Hypothetical Paradoxes
- 4.619 Reason as memetic immune disorder
- 4.620 How to use SMILE to solve Bayes Nets
- 4.621 The Finale of the Ultimate Meta Mega Crossover
- 4.622 Ethics as a black box function
- 4.623 Avoiding doomsday: a "proof" of the self-indication assumption
- 4.624 Anthropic reasoning and correlated decision making
- 4.625 Boredom vs. Scope Insensitivity
- 4.626 Minneapolis Meetup, This Saturday (26th) at 3:00 PM, University of Minnesota
- 4.627 The utility curve of the human population
- 4.628 Solutions to Political Problems As Counterfactuals
- 4.629 Non-Malthusian Scenarios
- 4.630 Correlated decision making: a complete theory
- 4.631 The Scylla of Error and the Charybdis of Paralysis
- 4.632 The Anthropic Trilemma
- 4.633 Your Most Valuable Skill
- 4.634 Privileging the Hypothesis
- 4.635 Why Many-Worlds Is Not The Rationally Favored Interpretation
- 4.636 Intuitive differences: when to agree to disagree
- 4.637 NY-area OB/LW meetup Saturday 10/3 7 PM
- 4.638 Regular NYC Meetups
- 4.639 Open Thread: October 2009
- 4.640 Why Don't We Apply What We Know About Twins to Everybody Else?
- 4.641 Are you a Democrat singletonian, or a Republican singletonian?
- 4.642 Scott Aaronson on Born Probabilities
- 4.643 'oy, girls on lw, want to get together some time?'
- 4.644 When Willpower Attacks
- 4.645 Dying Outside
- 4.646 Don't Think Too Hard.
- 4.647 The Presumptuous Philosopher's Presumptuous Friend
- 4.648 The First Step is to Admit That You Have a Problem
- 4.649 Let them eat cake: Interpersonal Problems vs Tasks
- 4.650 New Haven/Yale Less Wrong Meetup: 5 pm, Monday October 12
- 4.651 Boston Area Less Wrong Meetup: 2 pm Sunday October 11th
- 4.652 LW Meetup Google Calendar
- 4.653 I'm Not Saying People Are Stupid
- 4.654 How to get that Friendly Singularity: a minority view
- 4.655 The Argument from Witness Testimony
- 4.656 Link: PRISMs, Gom Jabbars, and Consciousness (Peter Watts)
- 4.657 What Program Are You?
- 4.658 Do the 'unlucky' systematically underestimate high-variance strategies?
- 4.659 Anticipation vs. Faith: At What Cost Rationality?
- 4.660 The power of information?
- 4.661 Quantifying ethicality of human actions
- 4.662 BHTV: Eliezer Yudkowsky and Andrew Gelman
- 4.663 We're in danger. I must tell the others...
- 4.664 PredictionBook.com - Track your calibration
- 4.665 The Shadow Question
- 4.666 Information theory and FOOM
- 4.667 Waterloo, ON, Canada Meetup: 6pm Sun Oct 18 '09!
- 4.668 How to think like a quantum monadologist
- 4.669 Localized theories and conditional complexity
- 4.670 Applying Double Standards to "Divisive" Ideas
- 4.671 Near and far skills
- 4.672 Shortness is now a treatable condition
- 4.673 Lore Sjoberg's Life-Hacking FAQK
- 4.674 Why the beliefs/values dichotomy?
- 4.675 Biking Beyond Madness (link)
- 4.676 Rationality Quotes: October 2009
- 4.677 The continued misuse of the Prisoner's Dilemma
- 4.678 Better thinking through experiential games
- 4.679 Extreme risks: when not to use expected utility
- 4.680 Pound of Feathers, Pound of Gold
- 4.681 Arrow's Theorem is a Lie
- 4.682 The Value of Nature and Old Books
- 4.683 Circular Altruism vs. Personal Preference
- 4.684 Computer bugs and evolution
- 4.685 Doing your good deed for the day
- 4.686 Expected utility without the independence axiom
- 4.687 Post retracted: If you follow expected utility, expect to be money-pumped
- 4.688 A Less Wrong Q&A with Eliezer (Step 1: The Proposition)
- 4.689 David Deutsch: A new way to explain explanation
- 4.690 Less Wrong / Overcoming Bias meet-up groups
- 4.691 Our House, My Rules
- 4.692 Open Thread: November 2009
- 4.693 Re-understanding Robin Hanson's "Pre-Rationality"
- 4.694 Rolf Nelson's "The Rational Entrepreneur"
- 4.695 Money pumping: the axiomatic approach
- 4.696 Light Arts
- 4.697 News: Improbable Coincidence Slows LHC Repairs
- 4.698 Bay area LW meet-up
- 4.699 All hail the Lisbon Treaty! Or is that "hate"? Or just "huh"?
- 4.700 Hamster in Tutu Shuts Down Large Hadron Collider
- 4.701 The Danger of Stories
- 4.702 Practical rationality in surveys
- 4.703 Reflections on Pre-Rationality
- 4.704 Rationality advice from Terry Tao
- 4.705 Restraint Bias
- 4.706 What makes you YOU? For non-deists only.
- 4.707 Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions
- 4.708 Test Your Calibration!
- 4.709 Anti-Akrasia Technique: Structured Procrastination
- 4.710 Boston meetup Nov 15 (and others)
- 4.711 Consequences of arbitrage: expected cash
- 4.712 Auckland meet up Saturday Nov 28th
- 4.713 The Academic Epistemology Cross Section: Who Cares More About Status?
- 4.714 BHTV: Yudkowsky / Robert Greene
- 4.715 Why (and why not) Bayesian Updating?
- 4.716 Efficient prestige hypothesis
- 4.717 A Less Wrong singularity article?
- 4.718 Request For Article: Many-Worlds Quantum Computing
- 4.719 The One That Isn't There
- 4.720 Calibration for continuous quantities
- 4.721 Friedman on Utility
- 4.722 Rational lies
- 4.723 In conclusion: in the land beyond money pumps lie extreme events
- 4.724 How to test your mental performance at the moment?
- 4.725 Agree, Retort, or Ignore? A Post From the Future
- 4.726 Contrarianism and reference class forecasting
- 4.727 Getting Feedback by Restricting Content
- 4.728 Rooting Hard for Overpriced M&Ms
- 4.729 A Nightmare for Eliezer
- 4.730 Rationality Quotes November 2009
- 4.731 Morality and International Humanitarian Law
- 4.732 Action vs. inaction
- 4.733 The Moral Status of Independent Identical Copies
- 4.734 Call for new SIAI Visiting Fellows, on a rolling basis
- 4.735 Open Thread: December 2009
- 4.736 The Difference Between Utility and Utility
- 4.737 11 core rationalist skills
- 4.738 Help Roko become a better rationalist!
- 4.739 Intuitive supergoal uncertainty
- 4.740 Frequentist Statistics are Frequently Subjective
- 4.741 Arbitrage of prediction markets
- 4.742 Parapsychology: the control group for science
- 4.743 Science - Idealistic Versus Signaling
- 4.744 You Be the Jury: Survey on a Current Event
- 4.745 Probability Space & Aumann Agreement
- 4.746 What Are Probabilities, Anyway?
- 4.747 The persuasive power of false confessions
- 4.748 A question of rationality
- 4.749 The Amanda Knox Test: How an Hour on the Internet Beats a Year in the Courtroom
- 4.750 Against picking up pennies
- 4.751 Previous Post Revised
- 4.752 Man-with-a-hammer syndrome
- 4.753 Rebasing Ethics
- 4.754 Getting Over Dust Theory
- 4.755 Philadelphia LessWrong Meetup, December 16th
- 4.756 An account of what I believe to be inconsistent behavior on the part of our editor
- 4.757 December 2009 Meta Thread
- 4.758 Reacting to Inadequate Data
- 4.759 The Contrarian Status Catch-22
- 4.760 Any sufficiently advanced wisdom is indistinguishable from bullshit
- 4.761 Fundamentally Flawed, or Fast and Frugal?
- 4.762 Sufficiently Advanced Sanity
- 4.763 Mandating Information Disclosure vs. Banning Deceptive Contract Terms
- 4.764 If reason told you to jump off a cliff, would you do it?
- 4.765 The Correct Contrarian Cluster
- 4.766 Karma Changes
- 4.767 lessmeta
- 4.768 The 9/11 Meta-Truther Conspiracy Theory
- 4.769 Two Truths and a Lie
- 4.770 On the Power of Intelligence and Rationality
- 4.771 Are these cognitive biases, biases?
- 4.772 Positive-affect-day-Schelling-point-mas Meetup
- 4.773 Playing the Meta-game
- 4.774 A Master-Slave Model of Human Preferences
- 4.775 That other kind of status
- 4.776 Singularity Institute $100K Challenge Grant / 2009 Donations Reminder
- 4.777 Boksops -- Ancient Superintelligence?
- 4.778 New Year's Resolutions Thread
- 4.779 New Year's Predictions Thread
- 4.780 End of 2009 articles
Rationality is a technique to be trained.
Rationality is the martial art of the mind, building on universally human machinery. But developing rationality is more difficult than developing the physical martial arts, in part because rationality skill is harder to verify. In recent decades, scientific fields such as heuristics and biases, Bayesian probability theory, evolutionary psychology, and social psychology have given us a theoretical body of work on which to build the martial art of rationality. It remains to develop, and especially to communicate, techniques that apply this theoretical work introspectively to our own minds.
Basic introduction of the metaphor and some of its consequences.
Truth can be instrumentally useful and intrinsically satisfying.
Why should we seek truth? Pure curiosity is an emotion, but not therefore irrational. Instrumental value is another reason, with the advantage of giving an outside verification criterion. A third reason is conceiving of truth as a moral duty, but this might invite moralizing about "proper" modes of thinking that don't work. Still, we need to figure out how to think properly. That means avoiding biases, for which see the next post.
You have an instrumental motive to care about the truth of your beliefs about anything you care about.
Biases are obstacles to truth seeking caused by one's own mental machinery.
There are many more ways to miss the truth than to find it. Finding the truth is the point of avoiding the things we call "biases", which form one of the clusters of obstacles we encounter: biases are those obstacles to truth-finding that arise from the structure of the human mind, rather than from insufficient information or computing power, from brain damage, or from bad learned habits or beliefs. Ultimately, though, what matters is not what we label a "bias", but that we find and overcome every obstacle to truth.
Use humility to justify further action, not as an excuse for laziness and ignorance.
There are good and bad kinds of humility. Proper humility is not being selectively underconfident about uncomfortable truths. Proper humility is not the same as social modesty, which can be an excuse for not even trying to be right. Proper scientific humility means not just acknowledging one's uncertainty with words, but taking specific actions to plan for the case that one is wrong.
Factor in what other people think, but not symmetrically, if they are not epistemic peers.
The Modesty Argument states that any two honest Bayesian reasoners who disagree should each take the other's beliefs into account, with both arriving at a probability distribution that is the average of the ones they started with. Robin Hanson seems to accept the argument, but Eliezer does not. Eliezer gives the example of his own disagreement with a creationist as evidence that following the Modesty Argument could decrease individual rationality. He also notes that those who agree with the argument do not seem to take it into account when planning their actions.
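The averaging step the Modesty Argument prescribes can be made concrete. A minimal sketch in Python, assuming discrete distributions over the same set of hypotheses; the hypothesis names and numbers are illustrative, not taken from the post:

```python
def average_beliefs(p1, p2):
    """Return the midpoint of two discrete probability distributions
    defined over the same set of hypotheses."""
    assert p1.keys() == p2.keys(), "reasoners must consider the same hypotheses"
    return {h: (p1[h] + p2[h]) / 2 for h in p1}

# Two reasoners who disagree about tomorrow's weather (made-up numbers):
alice = {"rain": 0.75, "no rain": 0.25}
bob = {"rain": 0.25, "no rain": 0.75}

merged = average_beliefs(alice, bob)
print(merged)  # {'rain': 0.5, 'no rain': 0.5}
```

Note that the merged distribution is still normalized, since the average of two distributions that each sum to 1 also sums to 1 - the disagreement is about whether adopting it makes either reasoner more rational.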
You can pragmatically say "I don't know", but you rationally should have a probability distribution.
An edited instant messaging conversation regarding the use of the phrase "I don't know". "I don't know" is a useful phrase if you want to avoid getting in trouble or convey the fact that you don't have access to privileged information.
People respond in different ways to clear evidence they're wrong, not always by updating and moving on.
A story about an underground society divided into two factions: one that believes that the sky is blue and one that believes the sky is green. At the end of the story, the reactions of various citizens to discovering the outside world and finally seeing the color of the sky are described.
Publications in peer-reviewed scientific journals are more worthy of trust than what you detect with your own ears and eyes.
Those who understand the map/territory distinction will integrate their knowledge, as they see the evidence that reality is a single unified process.
Written regarding the proverb "Outside the laboratory, scientists are no wiser than anyone else." The case is made that if this proverb is in fact true, that's quite worrisome because it implies that scientists are blindly following scientific rituals without understanding why. In particular, it is argued that if a scientist is religious, e probably doesn't understand the foundations of science very well.
In your discussions, beware, for people have great difficulty being rational about current political issues. This is no surprise to someone familiar with evolutionary psychology.
People act funny when they talk about politics. In the ancestral environment, being on the wrong side might get you killed, while being on the correct side might get you sex or food, or let you kill your hated rival. Politics is an extension of war by other means: arguments are soldiers. Once you know which side you're on, you must support all arguments of that side and attack all arguments that appear to favor the enemy side; otherwise it's like stabbing your soldiers in the back - providing aid and comfort to the enemy. If you must talk about politics (for the purpose of teaching rationality), use examples from the distant past. If your topic legitimately relates to attempts to ban evolution in school curricula, then go ahead and talk about it, but don't blame it explicitly on the whole Republican/Democratic/Liberal/Conservative/Nationalist Party.
Admit it when the evidence goes against you, or else things can get a whole lot worse.
Casey Serin owes banks 2.2 million dollars after lying on mortgage applications in order to simultaneously buy eight different houses in different states. The sad part is that he hasn't given up: he hasn't declared bankruptcy, and has just attempted to purchase another house. While this behavior seems merely stupid, it brings to mind Merton and Scholes of Long-Term Capital Management, who made 40% profits for three years and then lost it all when they overleveraged. Each profession has its own rules for success, which makes general rationality seem unlikely to help greatly in life. Yet one of the greater skills is simply not being stupid, and rationality does help with that.
Interviewees are a selection-biased sample of the talent pool, skewed toward those who are not successful or happy in their current jobs.
Software companies may see themselves as being very selective about who they hire. Out of 200 applicants, they may hire just one or two. However, that doesn't necessarily mean that they're hiring the top 1%. The programmers who weren't hired are likely to apply for jobs somewhere else. Overall, the worst programmers will apply for many more jobs over the course of their careers than the best. So programmers who are applying for a particular job are not representative of programmers as a whole. This phenomenon probably shows up in other places as well.
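The selection effect described above can be illustrated with a toy simulation; this is a hedged sketch with made-up numbers, not data from the post. The one assumption is that hiring probability per application rises with skill, so weaker programmers stay on the market longer:

```python
import random

random.seed(0)

applications = []  # skill level recorded once per application submitted
for _ in range(10_000):  # 10,000 programmers, skill uniform on [0, 1]
    skill = random.random()
    p_hire = 0.05 + 0.45 * skill  # assumed: better programmers get hired faster
    while True:
        applications.append(skill)
        if random.random() < p_hire:
            break  # hired; stops applying

pool_mean = sum(applications) / len(applications)
print("mean skill in the population: 0.50")
print(f"mean skill of the applicant pool: {pool_mean:.2f}")  # noticeably lower
```

Even though skill is uniformly distributed in the population, the applicant pool is dominated by the applications of weaker programmers, who apply many more times before being hired.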
Debates over outcomes with multiple effects will have arguments both for and against, so you must integrate the evidence, not expect the issue to be completely one-sided.
Robin Hanson proposed a "banned products shop" where things the government would ordinarily ban are sold. Eliezer responded that this would probably cause at least one stupid and innocent person to die, and was surprised when people inferred from this remark that he opposed Robin's idea. Policy questions are complex actions with many consequences, so they should only rarely appear one-sided to an objective observer. A person's intelligence is largely a product of circumstances they cannot control. Eliezer argues for cost-benefit analysis instead of the traditional libertarian tough-mindedness (the view that people who do stupid things deserve the consequences).
Just because your ethics require an action doesn't mean the universe will exempt you from the consequences.
Manufactured cars kill an estimated 1.2 million people per year worldwide - roughly 2% of the annual planetary death rate. Not everyone who dies in an automobile accident is someone who decided to drive a car: the tally of casualties includes pedestrians, and minor children who had to be pushed screaming into the car on the way to school. And yet we still manufacture automobiles, because, well, we're in a hurry. The point is that the consequences don't change no matter how good the ethical justification sounds.
People have an irrational tendency to simplify their assessment of things into how good or bad they are without considering that the things in question may have many distinct and unrelated attributes.
In non-binary answer spaces, you can't add up pro and con arguments along one dimension without risk of getting important factual questions wrong.
Both sides are often right in describing the terrible things that will happen if we take the other side's advice; the universe is "unfair", terrible things are going to happen regardless of what we do, and it's our job to trade off for the least bad outcome.
In a rationalist community, it should not be necessary to talk in the usual circumlocutions when making empirical predictions. We should know that people treat arguments as soldiers, and recognize that behavior in ourselves. Looking at the actual truth values involved, you may come to see that much of what the Greens said about the downside of the Blue policy was true - that, left to the mercy of the free market, many people would be crushed by powers far beyond their understanding, nor would they deserve it - and that most of what the Blues said about the downside of the Green policy was also true - that regulators were fallible humans with poor incentives, whacking on delicately balanced forces with a sledgehammer.
As a side effect of evolution, superstimuli exist, and, as a result of economics, they are getting and will likely continue to get worse.
At least three people have died from playing online games non-stop. How is a game so enticing that, after 57 straight hours of play, a person would rather spend the next hour playing than sleeping or eating? A candy bar is a superstimulus: it matches, to an exaggerated degree, the cues - sugar and fat - that marked healthy food in the environment of evolutionary adaptedness (EEA). If people enjoy such things, the market will respond by providing as much of them as possible, even if other considerations make that undesirable.
Medical disclaimers without probabilities are hard to use, and if probabilities aren't there because some people can't handle having them there, maybe we ought to tax those people.
Eliezer complains about a disclaimer he had to sign before getting toe surgery because it didn't give numerical probabilities for the possible negative outcomes it described. He guesses this is because of people afflicted with "innumeracy" who would over-interpret small numbers. He proposes a tax in which people are asked whether they are innumerate and pay in proportion to their innumeracy. This tax is revealed in the comments to be a state-sponsored lottery.
Consider the thought experiment in which you can communicate only general thinking patterns that lead to right answers, as opposed to pre-computed conclusions.
Imagine that Archimedes of Syracuse invented a device that allows you to talk to him. Imagine the possibilities for improving history! Unfortunately, the device will not literally transmit your words - it transmits cognitive strategies. If you advise giving women the vote, it comes out as advising finding a wise tyrant, the Greek ideal of political discourse. Under such restrictions, what do you say to Archimedes?
If you want to really benefit humanity, do some original thinking, especially about areas of application, and directions of effort.
The point of the chronophone dilemma is to make us think about what kind of cognitive policies are good to follow when you don't know your destination in advance.
It is suggested that in some cases, people who say one thing and do another thing are not in fact "hypocrites". Instead they are suffering from "akrasia" or weakness of will. At the end, the problem of deciding what parts of a person's mind are considered their "real self" is discussed.
If part of a person--for example, the verbal module--says it wants to become more rational, we can ally with that part even when weakness of will makes the person's actions otherwise; hypocrisy need not be assumed.
Don't be satisfied knowing you are biased; instead, aspire to become stronger, studying your flaws so as to remove them. There is a temptation to take pride in confessions, which can impede progress.
There may be evolutionary psychological factors that encourage modesty and mediocrity, at least in appearance; while some of that may still apply today, you should mentally plan and strive to pull ahead, if you are doing things right.
There are two types of error: systematic error (bias) and random error (variance); by repeating experiments you can average out and drive down the variance, but not the bias.
If you know an estimator has high variance, you can intentionally introduce bias by choosing a simpler hypothesis, and thereby lower expected variance while raising expected bias; sometimes total error is lower, hence the "bias-variance tradeoff". Keep in mind that while statistical bias might be useful, cognitive biases are not.
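The tradeoff described above can be checked numerically. This is a minimal sketch with illustrative numbers (not from the post): estimating a small true mean from five noisy samples, where shrinking the unbiased sample mean toward zero adds bias but cuts variance enough to lower total squared error.

```python
import random

# Hypothetical setup: estimate a true mean of 0.2 from n = 5 samples with
# noise sigma = 1. The sample mean is unbiased with variance sigma^2/n = 0.2.
# Halving the estimate introduces bias (0.5 * mu) but quarters the variance,
# so mean squared error drops: the bias-variance tradeoff.
random.seed(0)
TRUE_MEAN, SIGMA, N, TRIALS = 0.2, 1.0, 5, 20000

def sample_mean():
    xs = [random.gauss(TRUE_MEAN, SIGMA) for _ in range(N)]
    return sum(xs) / N

plain_se, shrunk_se = 0.0, 0.0
for _ in range(TRIALS):
    m = sample_mean()
    plain_se  += (m - TRUE_MEAN) ** 2        # unbiased estimator
    shrunk_se += (0.5 * m - TRUE_MEAN) ** 2  # biased (shrunk) estimator

print(plain_se / TRIALS, shrunk_se / TRIALS)
```

With these numbers the unbiased estimator's MSE is about 0.2, while the shrunk estimator's is about 0.06, despite the deliberate bias.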
Variance decomposition does not imply majoritarian-ish results; this is an artifact of minimizing square error, and drops out using square root error when bias is larger than variance; how and why to factor in evidence requires more assumptions, as per Aumann agreement.
Mean squared error drops when we average our predictions, but only because it uses a convex loss function. If you faced a concave loss function, you wouldn't isolate yourself from others, which casts doubt on the relevance of Jensen's inequality for rational communication. The process of sharing thoughts and arguing differences is not like taking averages.
Anything worse than the majority opinion should get selected out, so the majority opinion is rarely strictly superior to existing alternatives.
Learning common biases won't help you obtain truth if you only use this knowledge to attack beliefs you don't like. Discussions about biases need to first do no harm by emphasizing motivated cognition, the sophistication effect, and dysrationalia, although even knowledge of these can backfire.
Not being stupid seems like a more easily generalizable skill than breakthrough success. If debiasing is mostly about not being stupid, its benefits are hidden: lottery tickets not bought, blind alleys not followed, cults not joined. Hence, checking whether debiasing works is difficult, especially in the absence of organizations or systematized training.
Inductive bias is a systematic direction in belief revisions. The same observations could be evidence for or against a belief, depending on your prior. Inductive biases are more or less correct depending on how well they correspond with reality, so "bias" might not be the best description.
This is an obsolete "meta" post.
The Friedman Unit is named after Thomas Friedman who called "the next six months" the critical period in Iraq eight times between 2003 and 2007. This is because future predictions are created and consumed in the now; they are used to create feelings of delicious goodness or delicious horror now, not provide useful future advice.
After a point, labeling a problem as "important" is a commons problem. Rather than increasing the total resources devoted to important problems, resources are taken from other projects. Some grant proposals need to be written, but eventually this process becomes zero- or negative-sum on the margin.
A prior is an assignment of a probability to every possible sequence of observations. In principle, the prior determines a probability for any event. Formally, the prior is a giant look-up table, which no Bayesian reasoner would literally implement. Nonetheless, the formal definition is sometimes convenient. For example, uncertainty about priors can be captured with a weighted sum of priors.
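The "weighted sum of priors" idea can be sketched in a few lines. This is a toy example with made-up numbers: two rival priors about a coin's heads-probability (0.5 vs. 0.9), mixed with equal weight; each observation re-weights the mixture by likelihood, and the mixture's predictions are the weighted sum of each component's predictions.

```python
# Hypothetical mixture of two priors over a coin's heads-probability.
priors = {0.5: 0.5, 0.9: 0.5}  # heads-probability -> mixture weight

def update(priors, heads):
    """Re-weight each component prior by the likelihood of the observation."""
    new = {p: w * (p if heads else 1 - p) for p, w in priors.items()}
    total = sum(new.values())
    return {p: w / total for p, w in new.items()}

for _ in range(3):             # observe three heads in a row
    priors = update(priors, heads=True)

# Predictive probability of heads under the updated mixture.
predictive = sum(p * w for p, w in priors.items())
print(priors, predictive)
```

After three heads, the weight on the 0.9 hypothesis rises to about 0.85, and the mixture as a whole predicts heads with probability about 0.84.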
Some defend lottery-ticket buying as a rational purchase of fantasy. But you are occupying your valuable brain with a fantasy whose probability is nearly zero, wasting emotional energy. Without the lottery, people might fantasize about things that they can actually do, which might lead to thinking of ways to make the fantasy a reality. To work around a bias, you must first notice it, analyze it, and decide that it is bad. Lottery advocates are failing to complete the third step.
If the opportunity to fantasize about winning justified the lottery, then a "new improved" lottery would be even better. You would buy a nearly-zero chance to become a millionaire at any moment over the next five years. You could spend every moment imagining that you might become a millionaire at that moment.
As a human, I have a proper interest in the future of human civilization, including the human pursuit of truth. That makes your rationality my business. The danger is that we will think that we can respond to irrationality with violence. Relativism is not the way to avoid this danger. Instead, commit to using only arguments and evidence, never violence, against irrational thinking.
This post was a place for debates about the nature of morality, so that subsequent posts touching tangentially on morality would not be overwhelmed.
Examples of questions to be discussed here included: What is the difference between "is" and "ought" statements? Why do some preferences seem voluntary, while others do not? Do children believe that God can change what is moral? Is there a direction to the development of moral beliefs in history, and, if so, what is the causal explanation of this? Does Tarski's definition of truth extend to moral statements? If you were physically altered to prefer killing, would "killing is good" become true? If the truth value of a moral claim cannot be changed by any physical act, does this make the claim stronger or weaker? What are the referents of moral claims, or are they empty of content? Are there "pure" ought-statements, or do they all have is-statements mixed into them? Are there pure aesthetic judgments or preferences?
Strong emotions can be rational. A rational belief that something good happened leads to rational happiness. But your emotions ought not to change your beliefs about events that do not depend causally on your emotions.
You can't change just one thing in the world and expect the rest to continue working as before.
In our everyday lives, we are accustomed to rules with exceptions, but the basic laws of the universe apply everywhere without exception. Apparent violations exist only in our models, not in reality.
"Quantum physics is not "weird". You are weird. You have the absolutely bizarre idea that reality ought to consist of little billiard balls bopping around, when in fact reality is a perfectly normal cloud of complex amplitude in configuration space. This is your problem, not reality's, and you are the one who needs to change."
If reality consistently surprises you, then your model needs revision. But beware those who act unsurprised by surprising data. Maybe their model was too vague to be contradicted. Maybe they haven't emotionally grasped the implications of the data. Or maybe they are trying to appear poised in front of others. Respond to surprise by revising your model, not by suppressing your surprise.
People justify Noble Lies by pointing out their benefits over doing nothing. But, if you really need these benefits, you can construct a Third Alternative for getting them. How? You have to search for one. Beware the temptation not to search or to search perfunctorily. Ask yourself, "Did I spend five minutes by the clock trying hard to think of a better alternative?"
One source of hope against death is Afterlife-ism. Some say that this justifies it as a Noble Lie. But there are better (because more plausible) Third Alternatives, including nanotech, actuarial escape velocity, cryonics, and the Singularity. If supplying hope were the real goal of the Noble Lie, advocates would prefer these alternatives. But the real goal is to excuse a fixed belief from criticism, not to supply hope.
The human brain can't represent large quantities: an environmental measure that will save 200,000 birds doesn't conjure anywhere near a hundred times the emotional impact and willingness-to-pay of a measure that would save 2,000 birds.
Saving one life and saving the whole world provide the same warm glow. But, however valuable a life is, the whole world is billions of times as valuable. The duty to save lives doesn't stop after the first saved life. Choosing to save one life when you could have saved two is as bad as murder.
There are no risk-free investments. Even US treasury bills would fail under a number of plausible "black swan" scenarios. Nassim Taleb's own investment strategy doesn't seem to take sufficient account of such possibilities. Risk management is always a good idea.
Correspondence bias, also known as the fundamental attribution error, refers to the tendency to attribute the behavior of others to intrinsic dispositions, while excusing one's own behavior as the result of circumstance.
Correspondence Bias is a tendency to attribute to a person a disposition to behave in a particular way, based on observing an episode in which that person behaves in that way. The data set that gets considered consists only of the observed episode, while the target model is of the person's behavior in general, in many possible episodes, in many different possible contexts that may influence the person's behavior.
People want to think that the Enemy is an innately evil mutant. But, usually, the Enemy is acting as you might in their circumstances. They think that they are the hero in their story and that their motives are just. That doesn't mean that they are right. Killing them may be the best option available. But it is still a tragedy.
This obsolete post was a place for free-form comments related to the project of the Overcoming Bias blog.
School encourages two bad habits of thought: (1) equating "knowledge" with the ability to parrot back answers that the teacher expects; and (2) assuming that authorities are perfectly reliable. The first happens because students don't have enough time to digest what they learn. The second happens especially in fields like physics because students are so often just handed the right answer.
Not every belief that we have is directly about sensory experience, but beliefs should pay rent in anticipations of experience. For example, if I believe that "Gravity is 9.8 m/s^2" then I should be able to predict where I'll see the second hand on my watch at the time I hear the crash of a bowling ball dropped off a building. On the other hand, if your postmodern English professor says that the famous writer Wulky is a "post-utopian," this may not actually mean anything. The moral is to ask "What experiences do I anticipate?" instead of "What statements do I believe?"
Suppose someone claims to have a dragon in their garage, but as soon as you go to look, they say, "It's an invisible dragon!" The remarkable thing is that they know in advance exactly which experimental results they shall have to excuse, indicating that some part of their mind knows what's really going on. And yet they may honestly believe they believe there's a dragon in the garage. They may perhaps believe it is virtuous to believe there is a dragon in the garage, and believe themselves virtuous. Even though they anticipate as if there is no dragon.
You can have some fun with people whose anticipations get out of sync with what they believe they believe. This post recounts a conversation in which a theist had to backpedal when he realized that, by drawing an empirical inference from his religion, he had opened up his religion to empirical disproof.
A woman on a panel enthusiastically declared her belief in a pagan creation myth, flaunting its most outrageously improbable elements. This seemed weirder than "belief in belief" (she didn't act like she needed validation) or "religious profession" (she didn't try to act like she took her religion seriously). So, what was she doing? She was cheering for paganism — cheering loudly by making ridiculous claims.
When you've stopped anticipating-as-if something is true, but still believe it is virtuous to believe it, this does not create the true fire of the child who really does believe. On the other hand, it is very easy for people to be passionate about group identification - sports teams, political sports teams - and this may account for the passion of beliefs worn as team-identification attire.
Religions used to claim authority in all domains, including biology, cosmology, and history. Only recently have religions attempted to be non-disprovable by confining themselves to ethical claims. But the ethical claims in scripture ought to be even more obviously wrong than the other claims, making the idea of non-overlapping magisteria a Big Lie.
When your theory is proved wrong, just scream "OOPS!" and admit your mistake fully. Don't just admit local errors. Don't try to protect your pride by conceding the absolute minimal patch of ground. Making small concessions means that you will make only small improvements. It is far better to make big improvements quickly. This is a lesson of Bayescraft that Traditional Rationality fails to teach.
If you are paid for post-hoc analysis, you might like theories that "explain" all possible outcomes equally well, without focusing uncertainty. But what if you don't know the outcome yet, and you need to have an explanation ready in 100 minutes? Then you want to spend most of your time on excuses for the outcomes that you anticipate most, so you still need a theory that focuses your uncertainty.
Doubt is often regarded as virtuous for the wrong reason: because it is a sign of humility and recognition of your place in the hierarchy. But from a rationalist perspective, this is not why you should doubt. The doubt, rather, should exist to annihilate itself: to confirm the reason for doubting, or to show the doubt to be baseless. When you can no longer make progress in this respect, the doubt is no longer useful to you as a rationalist.
One way to fight cached patterns of thought is to focus on precise concepts.
It was perfectly all right for Isaac Newton to explain just gravity, just the way things fall down - and how planets orbit the Sun, and how the Moon generates the tides - but not the role of money in human society or how the heart pumps blood. Sneering at narrowness is rather reminiscent of ancient Greeks who thought that going out and actually looking at things was manual labor, and manual labor was for slaves.
This post quotes a poem by Eugene Gendlin, which reads, "What is true is already so. / Owning up to it doesn't make it worse. / Not being open about it doesn't make it go away. / And because it's true, it is what is there to be interacted with. / Anything untrue isn't there to be lived. / People can stand what is true, / for they are already enduring it."
If you think that the apocalypse will be in 2020, while I think that it will be in 2030, how could we bet on this? One way would be for me to pay you X dollars every year until 2020. Then, if the apocalypse doesn't happen, you pay me 2X dollars every year until 2030. This idea could be used to set up a prediction market, which could give society information about when an apocalypse might happen. Yudkowsky later realized that this wouldn't work.
A hypothesis that forbids nothing permits everything, and thus fails to constrain anticipation. Your strength as a rationalist is your ability to be more confused by fiction than by reality. If you are equally good at explaining any outcome, you have zero knowledge.
If an experiment contradicts a theory, we are expected to throw out the theory, or else break the rules of Science. But this may not be the best inference. If the theory is solid, it's more likely that an experiment got something wrong than that all the confirmatory data for the theory was wrong. In that case, you should be ready to "defy the data", rejecting the experiment without coming up with a more specific problem with it; the scientific community should tolerate such defiances without social penalty, and reward those who correctly recognized the error if it fails to replicate. In no case should you try to rationalize how the theory really predicted the data after all.
Absence of proof is not proof of absence. But absence of evidence is always evidence of absence. According to the probability calculus, if P(H|E) > P(H) (observing E would be evidence for hypothesis H), then P(H|~E) < P(H) (absence of E is evidence against H). The absence of an observation may be strong evidence or very weak evidence of absence, but it is always evidence.
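The probability-calculus claim above can be verified with a quick numeric check. The numbers here are illustrative assumptions, not from the post: a 50% prior, with the evidence E being likelier under H than under ~H.

```python
# Hypothetical numbers: if observing E would raise P(H),
# then failing to observe E must lower P(H).
pH, pE_H, pE_notH = 0.5, 0.8, 0.3         # assumed prior and likelihoods

pE = pE_H * pH + pE_notH * (1 - pH)       # total probability of E
pH_E    = pE_H * pH / pE                  # posterior after seeing E
pH_notE = (1 - pE_H) * pH / (1 - pE)      # posterior after NOT seeing E

print(pH_E, pH_notE)                      # ~0.727 and ~0.222
```

Seeing E raises P(H) from 0.5 to about 0.727; not seeing E lowers it to about 0.222. The strength of the two shifts differs, but both are genuine evidence.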
If you are about to make an observation, then the expected value of your posterior probability must equal your current prior probability. On average, you must expect to be exactly as confident as when you started out. If you are a true Bayesian, you cannot seek evidence to confirm your theory, because you do not expect any evidence to do that. You can only seek evidence to test your theory.
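Conservation of expected evidence follows directly from the probability calculus, and the same illustrative numbers (assumed, not from the post) make it concrete: the probability-weighted average of the two possible posteriors equals the prior exactly.

```python
# Hypothetical prior and likelihoods; the expected posterior,
# averaged over both possible observations, equals the prior.
pH, pE_H, pE_notH = 0.5, 0.8, 0.3
pE = pE_H * pH + pE_notH * (1 - pH)
pH_E    = pE_H * pH / pE                  # posterior if E is observed
pH_notE = (1 - pE_H) * pH / (1 - pE)      # posterior if E is not observed

expected_posterior = pE * pH_E + (1 - pE) * pH_notE
print(expected_posterior)                 # equals the prior, 0.5
```

Any expected upward shift from one observation must be balanced by an expected downward shift from its absence, which is why a true Bayesian cannot plan to end up more confident on average.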
Many people think that you must abandon a belief if you admit any counterevidence. Instead, change your belief by small increments. Acknowledge small pieces of counterevidence by shifting your belief down a little. Supporting evidence will follow if your belief is true. "Won't you lose debates if you concede any counterarguments?" Rationality is not for winning debates; it is for deciding which side to join.
It is tempting to weigh each counterargument by itself against all supporting arguments. No single counterargument can overwhelm all the supporting arguments, so you easily conclude that your theory was right. Indeed, as you win this kind of battle over and over again, you feel ever more confident in your theory. But, in fact, you are just rehearsing already-known evidence in favor of your view.
Hindsight bias makes us overestimate how well our model could have predicted a known outcome. We underestimate the cost of avoiding a known bad outcome, because we forget that many other equally severe outcomes seemed as probable at the time. Hindsight bias distorts the testing of our models by observation, making us think that our models are better than they really are.
Hindsight bias leads us to systematically undervalue scientific findings, because we find it too easy to retrofit them into our models of the world. This unfairly devalues the contributions of researchers. Worse, it prevents us from noticing when we are seeing evidence that doesn't fit what we really would have expected. We need to make a conscious effort to be shocked enough.
For good social reasons, we require legal and scientific evidence to be more than just rational evidence. Hearsay is rational evidence, but as legal evidence it would invite abuse. Scientific evidence must be public and reproducible by everyone, because we want a pool of especially reliable beliefs. Thus, Science is about reproducible conditions, not the history of any one experiment.
The belief that nanotechnology is possible is based on qualitative reasoning from scientific knowledge. But such a belief is merely rational. It will not be scientific until someone constructs a nanofactory. Yet if you claim that nanomachines are impossible because they have never been seen before, you are being irrational. To think that everything that is not science is pseudoscience is a severe mistake.
People think that fake explanations use words like "magic," while real explanations use scientific words like "heat conduction." But being a real explanation isn't a matter of literary genre. Scientific-sounding words aren't enough. Real explanations constrain anticipation. Ideally, you could explain only the observations that actually happened. Fake explanations could just as well "explain" the opposite of what you observed.
In schools, "education" often consists of having students memorize answers to specific questions (i.e., the "teacher's password"), rather than learning a predictive model that says what is and isn't likely to happen. Thus, students incorrectly learn to guess at passwords in the face of strange observations rather than admit their confusion. Don't do that: any explanation you give should have a predictive model behind it. If your explanation lacks such a model, start from a recognition of your own confusion and surprise at seeing the result.
You don't understand the phrase "because of evolution" unless it constrains your anticipations. Otherwise, you are using it as attire to identify yourself with the "scientific" tribe. Similarly, it isn't scientific to reject strongly superhuman AI only because it sounds like science fiction. A scientific rejection would require a theoretical model that bounds possible intelligences. If your proud beliefs don't constrain anticipation, they are probably just passwords or attire.
It is very easy for a human being to think that a theory predicts a phenomenon, when in fact it was fitted to the phenomenon. Properly designed reasoning systems (general AIs) could avoid this mistake by applying probability theory, but humans have to write down a prediction in advance in order to ensure that our reasoning about causality is correct.
There are certain words and phrases that act as "stopsigns" to thinking. They aren't actually explanations, or help to resolve the actual issue at hand, but they act as a marker saying "don't ask any questions."
The theory of vitalism was developed before the idea of biochemistry. It stated that the mysterious properties of living matter, compared to nonliving matter, were due to an "élan vital". This explanation acts as a curiosity-stopper, and leaves the phenomenon just as mysterious and inexplicable as it was before the answer was given. It feels like an explanation, though it fails to constrain anticipation.
The theory of "emergence" has become very popular, but is just a mysterious answer to a mysterious question. After learning that a property is emergent, you aren't able to make any new predictions.
Positive bias is the tendency to look for evidence that confirms a hypothesis, rather than disconfirming evidence.
The concept of complexity isn't meaningless, but too often people assume that adding complexity to a system they don't understand will improve it. If you don't know how to solve a problem, adding complexity won't help; better to say "I have no idea" than to say "complexity" and think you've reached an answer.
Traditional rationality (without Bayes' Theorem) allows you to formulate hypotheses without a reason to prefer them to the status quo, as long as they are falsifiable. Even following all the rules of traditional rationality, you can waste a lot of time. It takes a lot of rationality to avoid making mistakes; a moderate level of rationality will just lead you to make new and different mistakes.
There are no inherently mysterious phenomena, but every phenomenon seems mysterious, right up until the moment that science explains it. It seems to us now that biology, chemistry, and astronomy are naturally the realm of science, but if we had lived through their discoveries, and watched them reduced from mysterious to mundane, we would be more reluctant to believe the next phenomenon is inherently mysterious.
It's easy not to take the lessons of history seriously; our brains aren't well-equipped to translate dry facts into experiences. But imagine living through the whole of human history - imagine watching mysteries be explained, watching civilizations rise and fall, being surprised over and over again - and you'll be less shocked by the strangeness of the next era.
Imagine trying to explain quantum physics, the internet, or any other aspect of modern society to people from 1900. Technology and culture change so quickly that our civilization would be unrecognizable to people 100 years ago; what will the world look like 100 years from now?
When you encounter something you don't understand, you have three options: to seek an explanation, knowing that that explanation will itself require an explanation; to avoid thinking about the mystery at all; or to embrace the mysteriousness of the world and worship your confusion.
Although science does have explanations for phenomena, it is not enough to simply say that "Science!" is responsible for how something works -- nor is it enough to appeal to something more specific like "electricity" or "conduction". Yet for many people, simply noting that "Science has an answer" is enough to make them no longer curious about how it works. In that respect, "Science" is no different from more blatant curiosity-stoppers like "God did it!" But you shouldn't let your interest die simply because someone else knows the answer (which is a rather strange heuristic anyway): You should only be satisfied with a predictive model, and how a given phenomenon fits into that model.
Under some circumstances, rejecting arguments on the basis of absurdity is reasonable. The absurdity heuristic can allow you to identify hypotheses that aren't worth your time. However, detailed knowledge of the underlying laws should allow you to override the absurdity heuristic. Objects fall, but helium balloons rise. The future has been consistently absurd and will likely go on being that way. When the absurdity heuristic is extended to rule out crazy-sounding things with a basis in fact, it becomes absurdity bias.
Availability bias is a tendency to estimate the probability of an event based on whatever evidence about that event pops into your mind, without taking into account the ways in which some pieces of evidence are more memorable than others, or some pieces of evidence are easier to come by than others. This bias directly consists in considering a mismatched data set that leads to a distorted model and a biased estimate.
New technologies and social changes have consistently happened at a rate that would seem absurd and impossible to people only a few decades before they happen. Hindsight bias causes us to see the past as obvious and as a series of changes towards the "normalcy" of the present; availability biases make it hard for us to imagine changes greater than those we've already encountered, or the effects of multiple changes. The future will be stranger than we think.
Exposure to numbers affects guesses on estimation problems by anchoring your mind to a given estimate, even if it's wildly off base. Be aware of the effect random numbers have on your estimation ability.
If you make a mistake, don't excuse it or pat yourself on the back for thinking originally; acknowledge you made a mistake and move on. If you become invested in your own mistakes, you'll stay stuck on bad ideas.
The Radical Honesty movement requires participants to speak the truth, always, whatever they think. The more competent you grow at avoiding self-deceit, the more of a challenge this would be - but it's an interesting thing to imagine, and perhaps strive for.
Advocates for the Singularity sometimes call for outreach to artists or poets; we should move away from thinking of people as if their profession is the only thing they can contribute to humanity. Being human is what gives us a stake in the future, not being poets or mathematicians.
Words like "democracy" or "freedom" are applause lights - no one disapproves of them, so they can be used to signal conformity and hand-wave away difficult problems. If you hear people talking about the importance of "balancing risks and opportunities" or of solving problems "through a collaborative process" that aren't followed up by any specifics, then the words are applause lights, not real thoughts.
George Orwell's writings on language and totalitarianism are critical to understanding rationality. Orwell was an opponent of the use of words to obscure meaning, or to convey ideas without their emotional impact. Language should get the point across - when the effort to convey information gets lost in the effort to sound authoritative, you are acting irrationally.
It's easy to think that rationality and seeking truth is an intellectual exercise, but this ignores the lessons of history. Cognitive biases and muddled thinking allow people to hide from their own mistakes and allow evil to take root. Spreading the truth makes a real difference in defeating evil.
George Orwell wrote about what he called "doublethink", in which a person holds two contradictory beliefs simultaneously. While some argue that self-deception can make you happier, doublethink will only lead to problems.
Eliezer explains that he is overcoming writer's block by writing one Less Wrong post a day.
We tend to plan envisioning that everything will go as expected. Even assuming that such an estimate is accurate conditional on everything going as expected, things will not go as expected. As a result, we routinely see outcomes worse than the ex ante worst case scenario.
The planning fallacy is a tendency to overestimate your efficiency in achieving a task. The data set you consider consists of simple cached ways of accomplishing the task, and lacks the unanticipated problems and more complex ways in which the process may unfold. As a result, the model fails to adequately describe the phenomenon, and the estimate is systematically wrong.
Nobel Laureate Daniel Kahneman recounts an incident where the inside view and the outside view of the time it would take to complete a project of his were widely different.
Elementary probability theory tells us that the probability of one thing (written P(A)) is necessarily greater than or equal to the probability of the conjunction of that thing and another thing (written P(A&B)). However, in the psychology lab, subjects' judgments do not conform to this rule. This is not an isolated artifact of a particular study design. Debiasing won't be as simple as practicing specific questions; it requires certain general habits of thought.
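The conjunction rule can be confirmed by brute force over a tiny sample space. This is a minimal sketch with an arbitrary choice of events: all three-coin-flip outcomes, with A = "first flip is heads" and B = "second flip is heads".

```python
import itertools

# Enumerate the sample space of three fair coin flips (1 = heads).
space = list(itertools.product([0, 1], repeat=3))

# P(A): first flip is heads.  P(A&B): first AND second flips are heads.
pA  = sum(1 for w in space if w[0] == 1) / len(space)
pAB = sum(1 for w in space if w[0] == 1 and w[1] == 1) / len(space)

print(pA, pAB)   # 0.5 and 0.25: P(A & B) can never exceed P(A)
```

Every outcome satisfying A&B also satisfies A, so the conjunction's probability can only be equal or smaller; subjects who rate a detailed scenario as more probable than one of its components are violating this.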
When it seems like an experiment that's been cited does not provide enough support for the interpretation given, remember that Scientists are generally pretty smart. Especially if the experiment was done a long time ago, or it is described as "classic" or "famous". In that case, you should consider the possibility that there is more evidence that you haven't seen. Instead of saying "This experiment could also be interpreted in this way", ask "How did they distinguish this interpretation from ________________?"
If you want to avoid the conjunction fallacy, you must try to feel a stronger emotional impact from Occam's Razor. Each additional detail added to a claim must feel as though it is driving the probability of the claim down towards zero.
Evidence is an event connected by a chain of causes and effects to whatever it is you want to learn about. It also has to be an event that is more likely if reality is one way, than if reality is another. If a belief is not formed this way, it cannot be trusted.
Part of what makes humans different from other animals is our own ability to reason about our reasoning. Mice do not think about the cognitive algorithms that generate their belief that the cat is hunting them. Our ability to think about what sort of thought processes would lead to correct beliefs is what gave rise to Science. This ability makes our admittedly flawed minds much more powerful.
If you are considering one hypothesis out of many, or that hypothesis is more implausible than others, or you wish to know with greater confidence, you will need more evidence. Ignoring this rule will cause you to jump to a belief without enough evidence, and thus be wrong.
Albert Einstein, when asked what he would do if an experiment disproved his theory of general relativity, responded with "I would feel sorry for [the experimenter]. The theory is correct." While this may sound like arrogance, Einstein doesn't look nearly as bad from a Bayesian perspective. In order to even consider the hypothesis of general relativity in the first place, he would have needed a large amount of Bayesian evidence.
To a human, Thor feels like a simpler explanation for lightning than Maxwell's equations, but that is because we don't see the full complexity of an intelligent mind. However, if you try to write a computer program to simulate Thor and a computer program to simulate Maxwell's equations, one will be much easier to accomplish. This is how the complexity of a hypothesis is measured in the formalisms of Occam's Razor.
September 26th is Petrov Day, celebrated to honor the deed of Stanislav Yevgrafovich Petrov on September 26th, 1983. Wherever you are, whatever you're doing, take a minute to not destroy the world.
The way to convince Eliezer that 2+2=3 is the same as the way to convince him of any proposition: give him enough evidence. If all available evidence, social, mental and physical, starts indicating that 2+2=3, then you will shortly convince Eliezer that 2+2=3 and that something is wrong with his past or his recollection of the past.
If you first write at the bottom of a sheet of paper, “And therefore, the sky is green!”, it does not matter what arguments you write above it afterward; the conclusion is already written, and it is already correct or already wrong.
Someone tells you only the evidence that they want you to hear. Are you helpless? Forced to update your beliefs until you reach their position? No, you also have to take into account what they could have told you but didn't.
Rationality works forward from evidence to conclusions. Rationalization tries in vain to work backward from favourable conclusions to the evidence. But you cannot rationalize what is not already rational. It is as if "lying" were called "truthization".
Book recommendations by Eliezer and readers.
You can't produce a rational argument for something that isn't rational. First select the rational choice. Then the rational argument is just a list of the same evidence that convinced you.
We all change our minds occasionally, but we don't constantly, honestly reevaluate every decision and course of action. Once you think you believe something, the chances are good that you already do, for better or worse.
When people doubt, they instinctively ask only the questions that have easy answers. When you're doubting one of your most cherished beliefs, close your eyes, empty your mind, grit your teeth, and deliberately think about whatever hurts the most.
If you can find within yourself the slightest shred of true uncertainty, then guard it like a forester nursing a campfire. If you can make it blaze up into a flame of curiosity, it will make you light and eager, and give purpose to your questioning and direction to your skills.
The path to rationality begins when you see a great flaw in your existing art, and discover a drive to improve, to create new skills beyond the helpful but inadequate ones you found in books. Eliezer's first step was to catch what it felt like to shove an unwanted fact to the corner of his mind. Singlethink is the skill of not doublethinking.
Traditional Rationality is phrased in terms of social rules, with violations interpretable as cheating - as defections from cooperative norms. But viewing rationality as a social obligation gives rise to some strange ideas. The laws of rationality are mathematics, and no social maneuvering can exempt you.
The facts that philosophers call "a priori" arrived in your brain by a physical process. Thoughts are existent in the universe; they are identical to the operation of brains. The "a priori" belief generator in your brain works for a reason.
Contamination by priming is a problem that relates to the way facts are implicitly introduced into the attended data set. When you are primed with a concept, the facts related to that concept come to mind more easily. As a result, the data set selected by your mind becomes tilted towards the elements related to that concept, even if it has no relation to the question you are trying to answer. Your thinking becomes contaminated, shifted in a particular direction. The data set in your focus of attention becomes less representative of the phenomenon you are trying to model, and more representative of the concepts you were primed with.
Some experiments on priming suggest that mere exposure to a view is enough to get one to passively accept it, at least until it is specifically rejected.
Brains are slow. They need to cache as much as they can. They store answers to questions, so that no new thought is required to answer. Answers copied from others can end up in your head without you ever examining them closely. This makes you say things that you'd never believe if you thought them through. So examine your cached thoughts! Are they true?
When asked to think creatively there's always a cached thought that you can fall into. To be truly creative you must avoid the cached thought. Think something actually new, not something that you heard was the latest innovation. Striving for novelty for novelty's sake is futile, instead you must aim to be optimal. People who strive to discover truth or to invent good designs, may in the course of time attain creativity.
One way to fight cached patterns of thought is to focus on precise concepts.
Just find ways of violating cached expectations.
To seem deep, find coherent but unusual beliefs, and concentrate on explaining them well. To be deep, you actually have to think for yourself.
The Logical Fallacy of Generalization from Fictional Evidence consists in drawing real-world conclusions from statements invented and selected for the purpose of writing fiction. The data set is not at all representative of the real world, and in particular of whatever real-world phenomenon you need to understand to answer your real-world question. Considering this data set leads to an inadequate model, and inadequate answers.
Proposing solutions prematurely is dangerous, because it introduces weak conclusions into the pool of facts you are considering. As a result, the data set you think about becomes weaker: overly tilted towards premature conclusions that are likely to be wrong, and less representative of the phenomenon you are trying to model than the initial facts you started from.
Medical spending and aid to Africa have no net effect (or worse). But it's heartbreaking to just say no...
Eliezer offers his congratulations to Paris Hilton, who he believed had signed up for cryonics. (It turns out that she hadn't.)
An Artificial Intelligence coded using Solomonoff Induction would be vulnerable to Pascal's Mugging. How should we, or an AI, handle situations in which it is very unlikely that a proposition is true, but if the proposition is true, it has more moral weight than anything else we can imagine?
Everyone knows what their own words mean, but experiments have confirmed that we systematically overestimate how much sense we are making to others.
Related to contamination and the illusion of transparency, we "anchor" on our own experience and under-adjust when trying to understand others.
Humans evolved in an environment where we almost never needed to explain long inferential chains of reasoning. This fact may account for the difficulty many people have when trying to explain complicated subjects. We only explain the last step of the argument, and not every step that must be taken from our listener's premises.
Humans greatly overestimate how much sense our explanations make. In order to explain something adequately, pretend that you're trying to explain it to someone much less informed than your target audience.
In addition to the difficulties encountered in trying to explain something so that your audience understands it, there are other problems associated in learning whether or not you have explained something properly. If you read your intended meaning into whatever your listener says in response, you may think that e understands a concept, when in fact e is simply rephrasing whatever it was you actually said.
In the modern world, unlike our ancestral environment, it is not possible for one person to know more than a tiny fraction of the world's scientific knowledge. The fact that you don't understand something does not mean that no one among the six billion other people on the planet understands it.
People act as though it is perfectly fine and normal for individuals to have differing levels of intelligence, but that it is absolutely horrible for one racial group to be more intelligent than another. Why should the two be considered any differently?
An obsolete post in which Eliezer queried Overcoming Bias readers to find out if they would be interested in holding in-person meetings.
When the evidence we've seen points towards a conclusion that we like or dislike, there is a temptation to stop the search for evidence prematurely, or to insist that more evidence is needed.
If you had to choose between torturing one person horribly for 50 years, or putting a single dust speck into the eyes of 3^^^3 people, what would you do?
When you find yourself considering a problem in which all visible options are uncomfortable, making a choice is difficult. Grit your teeth and choose anyway.
The day after Halloween, Eliezer made a joke related to Torture vs. Dust Specks, which he had posted just a few days earlier.
We should be suspicious of our tendency to justify our decisions with arguments that did not actually factor into making said decisions. Whatever process you actually use to make your decisions is what determines your effectiveness as a rationalist.
Evolution is awesomely powerful, unbelievably stupid, incredibly slow, monomaniacally singleminded, irrevocably splintered in focus, blindly shortsighted, and itself a completely accidental process. If evolution were a god, it would not be Jehovah, but H. P. Lovecraft's Azathoth, the blind idiot god burbling chaotically at the center of everything.
...is not how amazingly well it works, but that it works at all without a mind, brain, or the ability to think abstractly - that an entirely accidental process can produce complex designs. If you talk about how amazingly well evolution works, you're missing the point.
The wonder of the first replicator was not how amazingly well it replicated, but that a first replicator could arise, at all, by pure accident, in the primordial seas of Earth. That first replicator would undoubtedly be devoured in an instant by a sophisticated modern bacterium. Likewise, the wonder of evolution itself is not how well it works, but that a brainless, accidentally occurring optimization process can work at all. If you praise evolution for being such a wonderfully intelligent Creator, you're entirely missing the wonderful thing about it.
Evolution, while not simple, is sufficiently simpler than organic brains that we can describe mathematically how slow and stupid it is.
Modern evolutionary theory gives us a definite picture of evolution's capabilities. If you praise evolution one millimeter higher than this, you are not scoring points against creationists, you are just being factually inaccurate. In particular we can calculate the probability and time for advantageous genes to rise to fixation. For example, a mutation conferring a 3% advantage would have only a 6% probability of surviving, and if it did so, would take 875 generations to rise to fixation in a population of 500,000 (on average).
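The quoted figures follow from two standard population-genetics approximations. A quick sketch in Python (the formulas are the textbook approximations, stated here as assumptions rather than taken verbatim from the post):

```python
import math

s = 0.03       # fitness advantage conferred by the mutation
N = 500_000    # population size

# Haldane's approximation: a single new beneficial mutation escapes
# random drift with probability about 2s.
p_survive = 2 * s                # 0.06, i.e. a 6% chance of surviving

# For a mutation that does fix, the average time to rise to fixation
# is roughly 2 * ln(N) / s generations.
t_fix = 2 * math.log(N) / s      # about 875 generations

print(p_survive, round(t_fix))
```

Both approximations match the numbers in the summary: a 3% advantage gives roughly a 6% survival probability, and about 875 generations to fixation in a population of 500,000.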
Tried to argue mathematically that there could be at most 25MB of meaningful information (or thereabouts) in the human genome, but computer simulations failed to bear out the mathematical argument. It does seem probable that evolution has some kind of speed limit and complexity bound - eminent evolutionary biologists seem to believe it, and in fact the Genome Project discovered only 25,000 genes in the human genome - but this particular math may not be the correct argument.
A lot of people have gotten their grasp of evolutionary theory from Stephen Jay Gould, a man who committed the moral equivalent of fraud in a way that is difficult to explain. At any rate, he severely misrepresented what evolutionary biologists believe, in the course of pretending to attack certain beliefs. One needs to clear from memory, as much as possible, not just everything that Gould positively stated but everything he seemed to imply the mainstream theory believed.
A tale of how some pre-1960s biologists were led astray by expecting evolution to do smart, nice things like they would do themselves.
Describes a key case where some pre-1960s evolutionary biologists went wrong by anthropomorphizing evolution - in particular, Wynne-Edwards, Allee, and Brereton among others believed that predators would voluntarily restrain their breeding to avoid overpopulating their habitat. Since evolution does not usually do this sort of thing, their rationale was group selection - populations that did this would survive better. But group selection is extremely difficult to make work mathematically, and an experiment under sufficiently extreme conditions to permit group selection had rather different results.
Many people who espouse a philosophy of selfishness aren't really selfish. If they were selfish, there are a lot more productive things to do with their time than espouse selfishness, for instance. Instead, individuals who proclaim themselves selfish do whatever it is they actually want, including altruism, but can always find some sort of self-interest rationalization for their behavior.
Many people provide fake reasons for their own moral reasoning. Religious people claim that the only reason people don't murder each other is because of God. Selfish-ists provide altruistic justifications for selfishness. Altruists provide selfish justifications for altruism. If you want to know how moral someone is, don't look at their reasons. Look at what they actually do.
Why study evolution? For one thing - it lets us see an alien optimization process up close - lets us see the real consequence of optimizing strictly for an alien optimization criterion like inclusive genetic fitness. Humans, who try to persuade other humans to do things their way, think that this policy criterion ought to require predators to restrain their breeding to live in harmony with prey; the true result is something that humans find less aesthetic.
A central principle of evolutionary biology in general, and evolutionary psychology in particular. If we regarded human taste buds as trying to maximize fitness, we might expect that, say, humans fed a diet too high in calories and too low in micronutrients, would begin to find lettuce delicious, and cheeseburgers distasteful. But it is better to regard taste buds as an executing adaptation - they are adapted to an ancestral environment in which calories, not micronutrients, were the limiting factor.
The human brain, and every ability for thought and emotion in it, are all adaptations selected for by evolution. Humans have the ability to feel angry for the same reason that birds have wings: ancient humans and birds with those adaptations had more kids. But, it is easy to forget that there is a distinction between the reason humans have the ability to feel anger, and the reason why a particular person was angry at a particular thing. Human brains are adaptation executors, not fitness maximizers.
Brains made of proteins can learn much faster than DNA, but DNA does seem to be more adaptable. The complexity of the evolutionary hypothesis is so enormous that no species, other than humans, is capable of thinking it, and yet DNA seems to implicitly understand it. This happens because DNA learns only through actual consequences, while protein brains can simply imagine the consequences.
Describes the evolutionary psychology behind the complexity of human values - how they got to be complex, and why, given that origin, there is no reason in hindsight to expect them to be simple. We certainly are not built to maximize genetic fitness.
Being a thousand shards of desire isn't always fun, but at least it's not boring. Somewhere along the line, we evolved tastes for novelty, complexity, elegance, and challenge - tastes that judge the blind idiot god's monomaniacal focus, and find it aesthetically unsatisfying.
Proposes a formalism for a discussion of the relationship between terminal and instrumental values. Terminal values are world states that we assign some sort of positive or negative worth to. Instrumental values are links in a chain of events that lead to desired world states.
Contrary to a naive view that evolution works for the good of a species, evolution says that genes which outreproduce their alternative alleles increase in frequency within a gene pool. It is entirely possible for genes which "harm" the species to outcompete their alternatives in this way - indeed, it is entirely possible for a species to evolve to extinction.
On how evolution could be responsible for the bystander effect.
It is a common misconception that evolution works for the good of a species, but actually evolution only cares about the inclusive fitness of genes relative to each other, and so it is quite possible for a species to evolve to extinction.
Price's Equation describes quantitatively how the change in the average of a trait, in each generation, is equal to the covariance between that trait and fitness. Such covariance requires substantial variation in traits, substantial variation in fitness, and substantial correlation between the two - and then, to get large cumulative selection pressures, the correlation must have persisted over many generations with high-fidelity inheritance, continuing sources of new variation, and frequent birth of a significant fraction of the population. People think of "evolution" as something that automatically gets invoked where "reproduction" exists, but these other conditions may not be fulfilled - which is why corporations haven't evolved, and nanodevices probably won't.
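For a perfectly transmitted trait, Price's Equation can be verified numerically. A minimal sketch in Python (the four-individual toy population is invented for illustration):

```python
# Toy population: trait values z_i and fitnesses w_i (offspring counts).
traits  = [1.0, 2.0, 3.0, 4.0]
fitness = [1.0, 2.0, 2.0, 3.0]

n = len(traits)
w_bar = sum(fitness) / n                 # mean fitness
z_bar = sum(traits) / n                  # mean trait this generation

# Population covariance between fitness and trait value.
cov_wz = sum((w - w_bar) * (z - z_bar)
             for w, z in zip(fitness, traits)) / n

# Price's Equation with perfect transmission:
# change in mean trait = Cov(w, z) / w_bar
predicted_change = cov_wz / w_bar

# Direct check: next generation's mean trait, weighting each
# parent's trait value by its number of offspring.
z_bar_next = sum(w * z for w, z in zip(fitness, traits)) / sum(fitness)
actual_change = z_bar_next - z_bar

print(predicted_change, actual_change)   # both 0.375
```

The two computations agree: selection shifts the mean trait exactly in proportion to the trait's covariance with fitness, scaled by mean fitness.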
It is enormously advantageous to know the basic mathematical equations at the base of a field. Understanding a few simple equations of evolutionary biology, knowing how to use Bayes' Rule, and understanding the wave equation for sound in air are not enormously difficult challenges. However, if you know them, your own capabilities are greatly enhanced.
If you take the hens who lay the most eggs in each generation, and breed from them, you should get hens who lay more and more eggs. Sounds logical, right? But this selection may actually favor the most dominant hen, that pecked its way to the top of the pecking order at the expense of other hens. Such breeding programs produce hens that must be housed in individual cages, or they will peck each other to death. Jeff Skilling of Enron fancied himself an evolution-conjurer - summoning the awesome power of evolution to work for him - and so, every year, every Enron employee's performance would be evaluated, and the bottom 10% would get fired, and the top performers would get huge raises and bonuses...
If you imagine a world where people are stuck on the "artificial addition" (i.e. machine calculator) problem, the way people currently are stuck on artificial intelligence, and you saw them trying the same popular approaches taken today toward AI, it would become clear how silly they are. Contrary to popular wisdom (in that world or ours), the solution is not to "evolve" an artificial adder, or invoke the need for special physics, or build a huge database of solutions, etc. -- because all of these methods dodge the crucial task of understanding what addition involves, and instead try to dance around it. Moreover, the history of AI research shows the problems of believing assertions one cannot re-generate from one's own knowledge.
Any time you believe you've learned something, you should ask yourself, "Could I re-generate this knowledge if it were somehow deleted from my mind, and how would I do so?" If the supposed knowledge is just empty buzzwords, you will recognize that you can't, and therefore that you haven't learned anything. But if it's an actual model of reality, this method will reinforce how the knowledge is entangled with the rest of the world, enabling you to apply it to other domains, and know when you need to update those beliefs. It will have become "truly part of you", growing and changing with the rest of your knowledge.
Tackles the Hollywood Rationality trope that "rational" preferences must reduce to selfish hedonism - caring strictly about personally experienced pleasure. An ideal Bayesian agent - implementing strict Bayesian decision theory - can have a utility function that ranges over anything, not just internal subjective experiences.
The words and statements that we use are inherently "leaky"; they do not precisely convey absolute and perfect information. Most humans have ten fingers, but if you know that someone is a human, you cannot confirm (with probability 1) that they have ten fingers. The same holds with planning and ethical advice.
There are a lot of things that humans care about. Therefore, the wishes that we make (as if to a genie) are enormously more complicated than we would intuitively suspect. In order to safely ask a powerful, intelligent being to do something for you, that being must share your entire decision criterion, or else the outcome will likely be horrible.
On noticing when you're still doing something that has become disconnected from its original purpose.
It is possible for the various steps in a complex plan to become valued in and of themselves, rather than as steps to achieve some desired goal. It is especially easy if the plan is being executed by a complex organization, where each group or individual in the organization is only evaluated by whether or not they carry out their assigned step. When this process is carried to its extreme, we get Soviet shoe factories manufacturing tiny shoes to increase their production quotas, and the No Child Left Behind Act.
It is easier to get trapped in a mistake of cognition if you have no practical purpose for your thoughts. Although pragmatic usefulness is not the same thing as truth, there is a deep connection between the two.
Positive and negative emotional impressions exert a greater effect on many decisions than does rational analysis.
It's difficult for humans to evaluate an option except in comparison to other options. Poor decisions result when a poor category for comparison is used. Includes an application for cheap gift-shopping.
Is there a way to exploit human biases to give the impression of largess with cheap gifts? Yes. Humans compare the value/price of an object to other similar objects. A $399 Eee PC is cheap (because other laptops are more expensive), yet a $399 PS3 is expensive (because the alternatives are less expensive). To give the impression of expense in a gift, choose a cheap class of item (say, a candle) and buy the most expensive one around.
Without a metric for comparison, estimates of, e.g., what sorts of punitive damages should be awarded, or when some future advance will happen, vary widely simply due to the lack of a scale.
Positive qualities seem to correlate with each other, whether or not they actually do.
It is better to risk your life to save 200 people than to save 3. But someone who risks their life to save 3 people is revealing a more altruistic nature than someone risking their life to save 200. And yet comic books are written about heroes who save 200 innocent schoolchildren, and not police officers saving three prostitutes.
John Perry, an extropian and a transhumanist, died when the north tower of the World Trade Center fell. He knew he was risking his existence to save other people, and he had hope that he might be able to avoid death, but he still helped them. This takes far more courage than someone who dies, expecting to be rewarded in an afterlife for their virtue.
Human beings can fall into a feedback loop around something that they hold dear. Every situation they consider, they use their great idea to explain. Because their great idea explained this situation, it now gains weight. Therefore, they should use it to explain more situations. This loop can continue, until they believe Belgium controls the US banking system, or that they can use an invisible blue spirit force to locate parking spots.
You can avoid a Happy Death Spiral by (1) splitting the Great Idea into parts (2) treating every additional detail as burdensome (3) thinking about the specifics of the causal chain instead of the good or bad feelings (4) not rehearsing evidence (5) not adding happiness from claims that "you can't prove are wrong"; but not by (6) refusing to admire anything too much (7) conducting a biased search for negative points until you feel unhappy again (8) forcibly shoving an idea into a safe box.
One of the most dangerous mistakes that a human being with human psychology can make, is to begin thinking that any argument against their favorite idea must be wrong, because it is against their favorite idea. Alternatively, they could think that any argument that supports their favorite idea must be right. This failure of reasoning has led to massive amounts of suffering and death in world history.
Describes Eliezer's motivations in the sequence leading up to his post on Fake Utility Functions.
Describes the seeming fascination that many have with trying to compress morality down to a single principle. The sequence leading up to this post tries to explain the cognitive twists whereby people smuggle all of their complicated other preferences into their choice of exactly which acts they try to justify using their single principle; but if they were really following only that single principle, they would choose other acts to justify.
When a cult encounters a blow to their own beliefs (a prediction fails to come true, their leader is caught in a scandal, etc) the cult will often become more fanatical. In the immediate aftermath, the cult members that leave will be the ones who were previously the voice of opposition, skepticism, and moderation. Without those members, the cult will slide further in the direction of fanaticism.
The dark mirror to the happy death spiral is the spiral of hate. When everyone looks good for attacking someone, and anyone who disagrees with any attack must be a sympathizer to the enemy, the results are usually awful. It is too dangerous for there to be anyone in the world that we would prefer to say negative things about, over saying accurate things about.
The Robbers Cave Experiment, by Sherif, Harvey, White, Hood, and Sherif (1954/1961), was designed to investigate the causes and remedies of problems between groups. Twenty-two middle-school-aged boys were divided into two groups and placed in a summer camp. From the first time the groups learned of each other's existence, a brutal rivalry began. The only way the counselors managed to bring the groups together was by giving the two groups a common enemy. Any resemblance to modern politics is just your imagination.
An obsolete meta post.
Simply having a good idea at the center of a group of people is not enough to prevent that group from becoming a cult. As long as the idea's adherents are human, they will be vulnerable to the flaws in reasoning that cause cults. Simply basing a group around the idea of being rational is not enough. You have to actually put in the work to oppose the slide into cultishness.
The world's greatest fool may say the Sun is shining, but that doesn't make it dark out. Stalin also believed that 2 + 2 = 4. Stupidity or human evil do not anticorrelate with truth. Arguing against weaker advocates proves nothing, because even the strongest idea will attract weak advocates.
There are many cases in which we should take the authority of experts into account, when we decide whether or not to believe their claims. But, if there are technical arguments that are available, these can screen off the authority of experts.
The more directly your arguments bear on a question, without intermediate inferences, the more powerful the evidence. We should try to observe evidence that is as near to the original question as possible, so that it screens off as many other arguments as possible.
There is an enormous psychological difference between believing that you absolutely, certainly, have the truth, versus trying to discover the truth. If you believe that you have the truth, and that it must be protected from heretics, torture and murder follow. Alternatively, if you believe that you are close to the truth, but perhaps not there yet, someone who disagrees with you is simply wrong, not a mortal enemy.
It is a common misconception that the Nazis wanted their eugenics program to create a new breed of supermen. In fact, they wanted to breed back to the archetypal Nordic man. They located their ideals in the past, which is a counterintuitive idea for many of us.
Ayn Rand, the leader of the Objectivists, praised reason and rationality. The group she created became a cult. Praising rationality does not provide immunity to the human trend towards cultishness.
A piece of poetry written to describe the proper attitude to take towards a mentor, or a hero.
When producing art that has some sort of political purpose behind it (like persuading people, or conveying a message), don't forget to actually make it art. It can't just be politics.
Two Koans about individuals concerned that they may have joined a cult.
Finding a blow to the hated enemy to be funny is a dangerous feeling, especially if that is the only reason why the joke is funny. Jokes should be funny on their own merits before they become deserving of laughter.
People treat things like the amount of effort put into a project, or the number of lines in a computer program, as positive things to maximize. But this is silly. Surely it is better to accomplish the same task with fewer lines of code.
Rationality is very different in its propositional statements from Eastern religions, like Taoism or Buddhism. But, it is sometimes easier to express ideas in rationality using the language of Zen or the Tao.
A story in which Mary tells Joseph that God made her pregnant so Joseph won't realize she's been cheating on him with the village rabbi.
The unanimous agreement of surrounding others can make subjects disbelieve (or at least, fail to report) what's right before their eyes. The addition of just one dissenter is enough to dramatically reduce the rates of improper conformity.
A way of breaking the conformity effect in some cases.
Joining a revolution does take courage, but it is something that humans can reliably do. It is comparatively more difficult to risk death. But it is more difficult than either of these to be the first person in a rebellion, the only one who is saying something different. That doesn't feel like going to school in black. It feels like going to school in a clown suit.
To take a leadership role, you really have to get people's attention first. This is often harder than it seems. If what you attempt to do fails, or if people don't follow you, you risk embarrassment. Deal with it.
People often nervously ask, "This isn't a cult, is it?" when encountering a group that thinks something weird. There are many reasons why this question doesn't make sense. For one thing, if the group really were a cult, its members would not say so. Instead, when considering whether or not to join a group, examine the details of the group itself. Is their reasoning sound? Do they do awful things to their members?
Eliezer explains that he references transhumanism on Overcoming Bias not for the purpose of proselytization, but because it is rather impossible for him to share lessons about rationality from his personal experiences otherwise, as he happens to be highly involved in the transhumanist community.
Eliezer warns readers that he is about to make a few posts directly discussing politics.
Voters for either political party usually have more in common with each other than they do with the politicians they vote for. And yet, they support their own "team members" with fanatic devotion. Nobody is allowed to criticize their own team's politicians, without their fellow voters accusing them of treason.
The conclusions we draw from analyzing the American political system are often biased by our own previous understanding of it, which we got in elementary school. In fact, the power of voting for a particular candidate (which is not the same as the power to choose which candidates will run) is not the greatest power of the voters. Instead, voters' main ability is the threat to change which party controls the government, or, extremely rarely, to completely dethrone both political parties and replace them with a third.
Many people try to vote "strategically", by considering which candidate is more "electable". One of the most important factors in whether someone is "electable" is whether they have received attention from the media and the support of one of the two major parties. Naturally, those organizations put considerable thought into electability when making their decisions. Ultimately, all arguments for "strategic voting" tend to fall apart. The voters themselves get so little say in who the next president is that the best we can do is simply not vote for nincompoops.
In evolutionary biology or psychology, a nice-sounding but untested theory is referred to as a "just-so story", after the stories written by Rudyard Kipling. But if there is a way to test the theory, people tend to consider it more likely to be correct. This is not a rational tendency.
Part of the reason professional evolutionary biologists dislike just-so stories is that many of them are simply wrong.
Sometimes, you calculate the probability of a certain event and find that the number is so unbelievably small that your brain really can't keep track of how small it is, any more than you can spot an individual grain of sand on a beach from 100 meters off. But, because you're already thinking about that event enough to calculate the probability of it, it feels like it's still worth keeping track of. It's not.
Nothing is perfectly black or white. Everything is gray. However, this does not mean that everything is the same shade of gray. It may be impossible to completely eliminate bias, but it is still worth reducing bias.
Those without the understanding of the Quantitative Way will often map the process of arriving at beliefs onto the social domains of Authority. They think that if Science is not infinitely certain, or if it has ever admitted a mistake, then it is no longer a trustworthy source, and can be ignored. This cultural gap is rather difficult to cross.
If you say you are 99.9999% confident of a proposition, you're saying that you could make one million statements at that same confidence level and be wrong, on average, only once. Probability 1 indicates a state of infinite certainty. Furthermore, once you assign probability 1 to a proposition, Bayes' theorem says that it can never be changed in response to any evidence. Probability 1 is a lot harder to reach with a human brain than you would think.
In the ordinary way of writing probabilities, 0 and 1 both seem like entirely reachable quantities. But when you transform probabilities into odds ratios, or log-odds, you realize that in order to get a proposition to probability 1 would require an infinite amount of evidence.
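The transformation described above can be sketched numerically. This is a minimal illustration, not code from the original post; the decibel scaling follows Jaynes's convention of measuring evidence in 10*log10 units:

```python
import math

def log_odds(p):
    """Convert a probability into log-odds, in decibels of evidence."""
    return 10 * math.log10(p / (1 - p))

print(log_odds(0.5))       # even odds: 0 dB
print(log_odds(0.999999))  # million-to-one confidence: about 60 dB
# log_odds(1.0) would divide by zero: on this scale, probability 1
# sits at positive infinity, i.e. infinitely far away in evidence.
```

Each factor-of-10 change in the odds costs a fixed 10 decibels of evidence, which is why no finite pile of observations ever reaches probability 1.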
The joy of mathematics is inventing mathematical objects, and then noticing that the mathematical objects that you just created have all sorts of wonderful properties that you never intentionally built into them. It is like building a toaster and then realizing that your invention also, for some unexplained reason, acts as a rocket jetpack and MP3 player.
Mathematicians expect that if you dig deep enough, a stable, or even beautiful, pattern will emerge. Some people claim that this belief is unfounded. But, we have previously found order in many of the places we've looked for it.
There are three reasons why a world governed by math can still seem messy. First, we may not actually know the math. Second, even if we do know all of the math, we may not have enough computing power to do the full calculation. And finally, even if we did know all the math, and we could compute it, we still don't know where in the mathematical system we are living.
Bayesians expect probability theory, and rationality itself, to be math. Self consistent, neat, even beautiful. This is why Bayesians think that Cox's theorems are so important.
When you find a seeming inconsistency in the rules of math, or logic, or probability theory, you might do well to consider that math has rightfully earned a bit more credibility than that. Check the proof. It is more likely that you have made a mistake in algebra, than that you have just discovered a fatal flaw in math itself.
Offered choices between gambles, people make decision-theoretically inconsistent decisions.
Offered choices between gambles, people make decision-theoretically inconsistent decisions.
We really shouldn't care less about the future than we do about the present.
Our moral preferences shouldn't be circular. If a policy A is better than B, and B is better than C, and C is better than D, and so on, then policy A really should be better than policy Z.
Our intuitions, the underlying cognitive tricks that we use to build our thoughts, are an indispensable part of our cognition. The problem is that many of those intuitions are incoherent, or are undesirable upon reflection. But if you try to "renormalize" your intuition, you wind up with what is essentially utilitarianism.
There is a long history of people claiming to have found paradoxes in Bayesian probability theory. Typically, these proofs are fallacious but correct-seeming, just as apparent proofs that 2 = 1 are. But in probability theory, the illegal operation is usually not a hidden division by zero, but rather an infinity that is not arrived at as the limit of a finite calculation. Once you are more careful with your math, these paradoxes typically go away.
Many people only start to grow as a rationalist when they find something that they care about more than they care about rationality itself. It takes something really scary to cause you to override your intuitions with math.
Newcomb's problem is a very famous decision theory problem in which the rational move appears to be consistently punished. This is the wrong attitude to take. Rationalists should win. If your particular ritual of cognition consistently fails to yield good results, change the ritual.
A word fails to connect to reality in the first place. Is Socrates a framster? Yes or no?
Socrates is a human, and humans, by definition, are mortal. So if you defined humans to not be mortal, would Socrates live forever?
Your argument, if it worked, could coerce reality to go a different way by choosing a different word definition. Socrates is a human, and humans, by definition, are mortal. So if you defined humans to not be mortal, would Socrates live forever?
You try to establish any sort of empirical proposition as being true "by definition". Socrates is a human, and humans, by definition, are mortal. So is it a logical truth if we empirically predict that Socrates should keel over if he drinks hemlock? It seems like there are logically possible, non-self-contradictory worlds where Socrates doesn't keel over - where he's immune to hemlock by a quirk of biochemistry, say. Logical truths are true in all possible worlds, and so never tell you which possible world you live in - and anything you can establish "by definition" is a logical truth.
You unconsciously slap the conventional label on something, without actually using the verbal definition you just gave. You know perfectly well that Bob is "human", even though, on your definition, you can never call Bob "human" without first observing him to be mortal.
The mere presence of words can influence thinking, sometimes misleading it.
The mere presence of words can influence thinking, sometimes misleading it.
The act of labeling something with a word disguises a challengeable inductive inference you are making. If the last 11 egg-shaped objects drawn have been blue, and the last 8 cubes drawn have been red, it is a matter of induction to say this rule will hold in the future. But if you call the blue eggs "bleggs" and the red cubes "rubes", you may reach into the barrel, feel an egg shape, and think "Oh, a blegg."
You try to define a word using words, in turn defined with ever-more-abstract words, without being able to point to an example.
You try to define a word using words, in turn defined with ever-more-abstract words, without being able to point to an example. "What is red?" "Red is a color." "What's a color?" "It's a property of a thing?" "What's a thing? What's a property?" It never occurs to you to point to a stop sign and an apple.
The extension doesn't match the intension. We aren't consciously aware of our identification of a red light in the sky as "Mars", which will probably happen regardless of our attempt to define "Mars" as "The God of War".
If you really think that your reasoning is superior to that of prediction markets, there is free money available to you right now. If you aren't picking it up, you clearly don't really believe that you can beat the markets.
Your verbal definition doesn't capture more than a tiny fraction of the category's shared characteristics, but you try to reason as if it does.
Your verbal definition doesn't capture more than a tiny fraction of the category's shared characteristics, but you try to reason as if it does. When the philosophers of Plato's Academy claimed that the best definition of a human was a "featherless biped", Diogenes the Cynic is said to have exhibited a plucked chicken and declared "Here is Plato's Man." The Platonists promptly changed their definition to "a featherless biped with broad nails".
You try to treat category membership as all-or-nothing, ignoring the existence of more and less typical subclusters.
You try to treat category membership as all-or-nothing, ignoring the existence of more and less typical subclusters. Ducks and penguins are less typical birds than robins and pigeons. Interestingly, a between-groups experiment showed that subjects thought a disease was more likely to spread from robins to ducks on an island, than from ducks to robins.
A verbal definition works well enough in practice to point out the intended cluster of similar things, but you nitpick exceptions.
A verbal definition works well enough in practice to point out the intended cluster of similar things, but you nitpick exceptions. Not every human has ten fingers, or wears clothes, or uses language; but if you look for an empirical cluster of things which share these characteristics, you'll get enough information that the occasional nine-fingered human won't fool you.
You ask whether something "is" or "is not" a category member but can't name the question you really want answered.
You ask whether something "is" or "is not" a category member but can't name the question you really want answered. What is a "man"? Is Barney the Baby Boy a "man"? The "correct" answer may depend considerably on whether the query you really want answered is "Would hemlock be a good thing to feed Barney?" or "Will Barney make a good husband?"
You treat intuitively perceived hierarchical categories like the only correct way to parse the world, without realizing that other forms of statistical inference are possible even though your brain doesn't use them.
You treat intuitively perceived hierarchical categories like the only correct way to parse the world, without realizing that other forms of statistical inference are possible even though your brain doesn't use them. It's much easier for a human to notice whether an object is a "blegg" or "rube"; than for a human to notice that red objects never glow in the dark, but red furred objects have all the other characteristics of bleggs. Other statistical algorithms work differently.
You talk about categories as if they are manna fallen from the Platonic Realm, rather than inferences implemented in a real brain.
You talk about categories as if they are manna fallen from the Platonic Realm, rather than inferences implemented in a real brain. The ancient philosophers said "Socrates is a man", not, "My brain perceptually classifies Socrates as a match against the 'human' concept".
You argue about a category membership even after screening off all questions that could possibly depend on a category-based inference. After you observe that an object is blue, egg-shaped, furred, flexible, opaque, luminescent, and palladium-containing, what's left to ask by arguing, "Is it a blegg?" But if your brain's categorizing neural network contains a (metaphorical) central unit corresponding to the inference of blegg-ness, it may still feel like there's a leftover question.
(see also the wiki page)
An example of how the technique helps.
You allow an argument to slide into being about definitions, even though it isn't what you originally wanted to argue about. If, before a dispute started about whether a tree falling in a deserted forest makes a "sound", you asked the two soon-to-be arguers whether they thought a "sound" should be defined as "acoustic vibrations" or "auditory experiences", they'd probably tell you to flip a coin. Only after the argument starts does the definition of a word become politically charged.
You think a word has a meaning, as a property of the word itself; rather than there being a label that your brain associates to a particular concept.
You think a word has a meaning, as a property of the word itself; rather than there being a label that your brain associates to a particular concept. When someone shouts, "Yikes! A tiger!", evolution would not favor an organism that thinks, "Hm... I have just heard the syllables 'Tie' and 'Grr' which my fellow tribemembers associate with their internal analogues of my own tiger concept and which aiiieeee CRUNCH CRUNCH GULP." So the brain takes a shortcut, and it seems that the meaning of tigerness is a property of the label itself. People argue about the correct meaning of a label like "sound".
You argue over the meanings of a word, even after all sides understand perfectly well what the other sides are trying to say.
You argue over the meanings of a word, even after all sides understand perfectly well what the other sides are trying to say. The human ability to associate labels to concepts is a tool for communication. When people want to communicate, we're hard to stop; if we have no common language, we'll draw pictures in sand. When you each understand what is in the other's mind, you are done.
You pull out a dictionary in the middle of an empirical or moral argument. Dictionary editors are historians of usage, not legislators of language. If the common definition contains a problem - if "Mars" is defined as the God of War, or a "dolphin" is defined as a kind of fish, or "Negroes" are defined as a separate category from humans, the dictionary will reflect the standard mistake.
You pull out a dictionary in the middle of any argument ever. Seriously, what the heck makes you think that dictionary editors are an authority on whether "atheism" is a "religion" or whatever? If you have any substantive issue whatsoever at stake, do you really think dictionary editors have access to ultimate wisdom that settles the argument?
You defy common usage without a reason, making it gratuitously hard for others to understand you. Fast stand up plutonium, with bagels without handle.
You use complex renamings to create the illusion of inference.
You use complex renamings to create the illusion of inference. Is a "human" defined as a "mortal featherless biped"? Then write: "All [mortal featherless bipeds] are mortal; Socrates is a [mortal featherless biped]; therefore, Socrates is mortal." Looks less impressive that way, doesn't it?
When a word poses a problem, the simplest solution is to eliminate the word and its synonyms.
If Albert and Barry aren't allowed to use the word "sound", then Albert will have to say "A tree falling in a deserted forest generates acoustic vibrations", and Barry will say "A tree falling in a deserted forest generates no auditory experiences". When a word poses a problem, the simplest solution is to eliminate the word and its synonyms.
Description of the technique.
The existence of a neat little word prevents you from seeing the details of the thing you're trying to think about.
The existence of a neat little word prevents you from seeing the details of the thing you're trying to think about. What actually goes on in schools once you stop calling it "education"? What's a degree, once you stop calling it a "degree"? If a coin lands "heads", what's its radial orientation? What is "truth", if you can't say "accurate" or "correct" or "represent" or "reflect" or "semantic" or "believe" or "knowledge" or "map" or "real" or any other simple term?
You have only one word, but there are two or more different things-in-reality, so that all the facts about them get dumped into a single undifferentiated mental bucket.
You have only one word, but there are two or more different things-in-reality, so that all the facts about them get dumped into a single undifferentiated mental bucket. It's part of a detective's ordinary work to observe that Carol wore red last night, or that she has black hair; and it's part of a detective's ordinary work to wonder if maybe Carol dyes her hair. But it takes a subtler detective to wonder if there are two Carols, so that the Carol who wore red is not the same as the Carol who had black hair.
You see patterns where none exist, harvesting other characteristics from your definitions even when there is no similarity along that dimension.
You see patterns where none exist, harvesting other characteristics from your definitions even when there is no similarity along that dimension. In Japan, it is thought that people of blood type A are earnest and creative, blood type Bs are wild and cheerful, blood type Os are agreeable and sociable, and blood type ABs are cool and controlled.
You try to sneak in the connotations of a word, by arguing from a definition that doesn't include the connotations.
You try to sneak in the connotations of a word, by arguing from a definition that doesn't include the connotations. A "wiggin" is defined in the dictionary as a person with green eyes and black hair. The word "wiggin" also carries the connotation of someone who commits crimes and launches cute baby squirrels, but that part isn't in the dictionary. So you point to someone and say: "Green eyes? Black hair? See, told you he's a wiggin! Watch, next he's going to steal the silverware."
You claim "X, by definition, is a Y!" On such occasions you're almost certainly trying to sneak in a connotation of Y that wasn't in your given definition.
You claim "X, by definition, is a Y!" On such occasions you're almost certainly trying to sneak in a connotation of Y that wasn't in your given definition. You define "human" as a "featherless biped", and point to Socrates and say, "No feathers - two legs - he must be human!" But what you really care about is something else, like mortality. If what was in dispute was Socrates's number of legs, the other fellow would just reply, "Whaddaya mean, Socrates's got two legs? That's what we're arguing about in the first place!"
You claim "Ps, by definition, are Qs!" If you see Socrates out in the field with some biologists, gathering herbs that might confer resistance to hemlock, there's no point in arguing "Men, by definition, are mortal!" The main time you feel the need to tighten the vise by insisting that something is true "by definition" is when there's other information that calls the default inference into doubt.
You try to establish membership in an empirical cluster "by definition". You wouldn't feel the need to say, "Hinduism, by definition, is a religion!" because, well, of course Hinduism is a religion. It's not just a religion "by definition", it's, like, an actual religion. Atheism does not resemble the central members of the "religion" cluster, so if it wasn't for the fact that atheism is a religion by definition, you might go around thinking that atheism wasn't a religion. That's why you've got to crush all opposition by pointing out that "Atheism is a religion" is true by definition, because it isn't true any other way.
Your definition draws a boundary around things that don't really belong together.
Your definition draws a boundary around things that don't really belong together. You can claim, if you like, that you are defining the word "fish" to refer to salmon, guppies, sharks, dolphins, and trout, but not jellyfish or algae. You can claim, if you like, that this is merely a list, and there is no way a list can be "wrong". Or you can stop playing nitwit games and admit that you made a mistake and that dolphins don't belong on the fish list.
Which sounds more plausible, "God did a miracle" or "A supernatural universe-creating entity temporarily suspended the laws of physics"?
You use a short word for something that you won't need to describe often, or a long word for something you'll need to describe often. This can result in inefficient thinking, or even misapplications of Occam's Razor, if your mind thinks that short sentences sound "simpler". Which sounds more plausible, "God did a miracle" or "A supernatural universe-creating entity temporarily suspended the laws of physics"?
You draw your boundary around a volume of space where there is no greater-than-usual density, meaning that the associated word does not correspond to any performable Bayesian inferences.
You draw your boundary around a volume of space where there is no greater-than-usual density, meaning that the associated word does not correspond to any performable Bayesian inferences. Since green-eyed people are not more likely to have black hair, or vice versa, and they don't share any other characteristics in common, why have a word for "wiggin"?
You draw an unsimple boundary without any reason to do so.
You draw an unsimple boundary without any reason to do so. The act of defining a word to refer to all humans, except black people, seems kind of suspicious. If you don't present reasons to draw that particular boundary, trying to create an "arbitrary" word in that location is like a detective saying: "Well, I haven't the slightest shred of support one way or the other for who could've murdered those orphans... but have we considered John Q. Wiffleheim as a suspect?"
If you are trying to judge whether some unpleasant idea is true you should visualise what the world would look like if it were true, and what you would do in that situation. This will allow you to be less scared of the idea, and reason about it without immediately trying to reject it.
To form accurate beliefs about something, you really do have to observe it. It's a very physical, very real process: any rational mind does "work" in the thermodynamic sense, not just the sense of mental effort. Engines of cognition are not so different from heat engines, though they manipulate entropy in a more subtle form than burning gasoline. So unless you can tell me which specific step in your argument violates the laws of physics by giving you true knowledge of the unseen, don't expect me to believe that a big, elaborate clever argument can do it either.
People learn under the traditional school regimen that the teacher tells you certain things, and you must believe them and recite them back; but if a mere student suggests a belief, you do not have to obey it. They map the domain of belief onto the domain of authority, and think that a certain belief is like an order that must be obeyed, but a probabilistic belief is like a mere suggestion. And when half-trained or tenth-trained rationalists abandon their art and try to believe without evidence just this once, they often build vast edifices of justification, confusing themselves just enough to conceal the magical steps. It can be quite a pain to nail down where the magic occurs - their structure of argument tends to morph and squirm away as you interrogate them. But there's always some step where a tiny probability turns into a large one - where they try to believe without evidence - where they step into the unknown, thinking, "No one can prove me wrong".
If a mind is arriving at true beliefs, and we assume that the second law of thermodynamics has not been violated, that mind must be doing something at least vaguely Bayesian - at least one process with a sort-of Bayesian structure somewhere - or it couldn't possibly work.
You use categorization to make inferences about properties that don't have the appropriate empirical structure, namely, conditional independence given knowledge of the class, to be well-approximated by Naive Bayes.
You use categorization to make inferences about properties that don't have the appropriate empirical structure, namely, conditional independence given knowledge of the class, to be well-approximated by Naive Bayes. No way am I trying to summarize this one. Just read the blog post.
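Conditional independence given the class is what makes Naive Bayes inference work. A toy sketch, with the blegg/rube features and all probabilities made up for illustration:

```python
# Naive Bayes: given the class, each observed feature is treated as
# independent, so the joint likelihood simply factorizes.
priors = {"blegg": 0.5, "rube": 0.5}
# P(feature | class) -- assumed numbers, for illustration only
likelihoods = {
    "blegg": {"blue": 0.95, "egg-shaped": 0.90, "furred": 0.85},
    "rube":  {"blue": 0.05, "egg-shaped": 0.10, "furred": 0.20},
}

def posterior(observed):
    """Posterior over classes, assuming features independent given class."""
    scores = {}
    for cls, prior in priors.items():
        score = prior
        for feature in observed:
            score *= likelihoods[cls][feature]
        scores[cls] = score
    total = sum(scores.values())
    return {cls: s / total for cls, s in scores.items()}

print(posterior(["blue", "egg-shaped"]))  # heavily favors "blegg"
```

The factorization is only an approximation: if the features are correlated even after you know the class, multiplying the per-feature likelihoods gives the wrong answer, which is the failure mode the post describes.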
Visualize a "triangular lightbulb". What did you see?
You think that words are like tiny little LISP symbols in your mind, rather than words being labels that act as handles to direct complex mental paintbrushes that can paint detailed pictures in your sensory workspace. Visualize a "triangular lightbulb". What did you see?
"Martin told Bob the building was on his left." But "left" is a function-word that evaluates with a speaker-dependent variable grabbed from the surrounding context. Whose "left" is meant, Bob's or Martin's?
You use a word that has different meanings in different places as though it meant the same thing on each occasion, possibly creating the illusion of something protean and shifting. "Martin told Bob the building was on his left." But "left" is a function-word that evaluates with a speaker-dependent variable grabbed from the surrounding context. Whose "left" is meant, Bob's or Martin's?
Contains summaries of the sequence of posts about the proper use of words.
This is where the "free will" puzzle is explicitly posed, along with criteria for what does and does not constitute a satisfying answer.
Where the mind cuts against reality's grain, it generates wrong questions - questions that cannot possibly be answered on their own terms, but only dissolved by understanding the cognitive algorithm that generates the perception of a question.
When you are faced with an unanswerable question - a question to which it seems impossible to even imagine an answer - there is a simple trick which can turn the question solvable. Instead of asking, "Why do I have free will?", try asking, "Why do I think I have free will?"
E. T. Jaynes used the term Mind Projection Fallacy to denote the error of projecting your own mind's properties into the external world. The Mind Projection Fallacy generalizes as an error. It is in the argument over the real meaning of the word sound, and in the magazine cover of the monster carrying off a woman in the torn dress, and Kant's declaration that space by its very nature is flat, and Hume's definition of a priori ideas as those "discoverable by the mere operation of thought, without dependence on what is anywhere existent in the universe"...
Probabilities express uncertainty, and it is only agents who can be uncertain. A blank map does not correspond to a blank territory. Ignorance is in the mind.
It's very easy to derive extremely wrong conclusions if you don't make a clear enough distinction between your beliefs about the world, and the world itself.
Using qualitative, binary reasoning may make it easier to confuse belief and reality; if we use probability distributions, the distinction is much clearer.
We build models of the universe that have many different levels of description. But so far as anyone has been able to determine, the universe itself has only the single level of fundamental physics - reality doesn't explicitly compute protons, only quarks.
Apparently "the mere touch of cold philosophy", i.e., the truth, has destroyed haunts in the air, gnomes in the mine, and rainbows. This calls to mind a rather different bit of verse:
One of these things Is not like the others One of these things Doesn't belong
The air has been emptied of its haunts, and the mine de-gnomed—but the rainbow is still there!
There is a very great distinction between being able to see where the rainbow comes from, and playing around with prisms to confirm it, and maybe making a rainbow yourself by spraying water droplets, versus some dour-faced philosopher just telling you, "No, there's nothing special about the rainbow. Didn't you hear? Scientists have explained it away. Just something to do with raindrops or whatever. Nothing to be excited about." I think this distinction probably accounts for a hell of a lot of the deadly existential emptiness that supposedly accompanies scientific reductionism.
Equations of physics aren't about strong emotions. They can inspire those emotions in the mind of a scientist, but the emotions are not as raw as the stories told about Jupiter (the god). And so it might seem that reducing Jupiter to a spinning ball of methane and ammonia takes away some of the poetry in those stories. But ultimately, we don't have to keep telling stories about Jupiter. It's not necessary for Jupiter to think and feel in order for us to tell stories, because we can always write stories with humans as their protagonists.
If you can't take joy in things that turn out to be explicable, you're going to set yourself up for eternal disappointment. Don't worry if quantum physics turns out to be normal.
It feels incredibly good to discover the answer to a problem that nobody else has answered. And we should enjoy finding answers. But we really shouldn't base our joy on the fact that nobody else has done it before. Even if someone else knows the answer to a puzzle, if you don't know it, it's still a mystery to you. And you should still feel joy when you discover the answer.
There are several reasons why it's worth talking about joy in the merely real in a discussion on reductionism. One is to leave a line of retreat. Another is to improve your own abilities as a rationalist by learning to invest your energy in the real world, and in accomplishing things here, rather than in a fantasy.
Magic (and dragons, and UFOs, and ...) get much of their charm from the fact that they don't actually exist. If dragons did exist, people would treat them like zebras; most people wouldn't bother to pay attention, but some scientists would get oddly excited about them. If we ever create dragons, or find aliens, we will have to learn to enjoy them, even though they happen to exist.
Most of the stuff reported in Science News is false, or at the very least, misleading. Scientific controversies are topics of such incredible difficulty that even people in the field aren't sure what's true. Read elementary textbooks. Study the settled science before you try to understand the outer fringes.
A proposal for a new holiday, in which journalists report on great scientific discoveries of the past as if they had just happened, and were still shocking.
Trying to replace religion with humanism, atheism, or transhumanism doesn't work. If you try to write a hymn to the nonexistence of god, it will fail, because you are simply trying to imitate something that we don't really need to imitate. But that doesn't mean that the feeling of transcendence is something we should always avoid. After all, in a world in which religion never existed, people would still feel that way.
Describes a few pieces of experimental evidence showing that objects or information which are believed to be in short supply are valued more than the same objects or information would be on their own.
People don't study science, in part, because they perceive it to be public knowledge. In fact, it's not; you have to study a lot before you actually understand it. But because science is thought to be freely available, people ignore it in favor of cults that conceal their secrets, even if those secrets are wrong. In fact, it might be better if scientific knowledge was hidden from anyone who didn't undergo the initiation ritual, and study as an acolyte, and wear robes, and chant, and...
Brennan is inducted into the Conspiracy.
When you pick up a cup of water, is it your hand that picks it up, or is it your fingers, thumb, and palm working together? Just because something can be reduced to smaller parts doesn't mean that the original thing doesn't exist.
It is very hard, without the benefit of hindsight, to understand just how it is that these little bouncing billiard balls called atoms could ever combine in such a way as to make something angry. If you try to imagine this problem without understanding the idea of neurons, information processing, computing, etc., you realize just how challenging reductionism actually is.
For a very long time, people had a detailed understanding of kinetics, and they had a detailed understanding of heat. They understood concepts such as momentum and elastic rebounds, as well as concepts such as temperature and pressure. It took an extraordinary amount of work in order to understand things deeply enough to make us realize that heat and motion were really the same thing.
Eliezer's contribution to Amazing Breakthrough Day.
Virtually every belief you have is not about elementary particle fields, which are (as far as we know) the actual reality. This doesn't mean that those beliefs aren't true. "Snow is white" does not mention quarks anywhere, and yet snow nevertheless is white. It's a computational shortcut, but it's still true.
Don't try to put your consciousness or your personal identity outside physics. Whatever makes you say "I think therefore I am" causes your lips to move; it is within the chains of cause and effect that produce our observed universe.
A few more points on Zombies.
The argument against zombies can be extended into a more general anti-zombie principle. But figuring out what that more general principle is turns out to be more difficult than it may seem.
Fleshes out the generalized anti-zombie principle a bit more, and describes the game "follow-the-improbability".
Sometimes, the fact that it's impossible even in principle to observe something isn't enough to conclude that it doesn't exist.
If a spaceship goes over the cosmological horizon relative to us, so that it can no longer communicate with us, should we believe that the spaceship instantly ceases to exist?
Quantum mechanics doesn't deserve its fearsome reputation.
Quantum mechanics doesn't deserve its fearsome reputation. If you tell people something is supposed to be mysterious, they won't understand it. It's human intuitions that are "strange" or "weird"; physics itself is perfectly normal. Talking about historical erroneous concepts like "particles" or "waves" is just asking to confuse people; present the real, unified quantum physics straight out. The series will take a strictly realist perspective - quantum equations describe something that is real and out there. Warning: Although a large faction of physicists agrees with this, it is not universally accepted. Stronger warning: I am not even going to present non-realist viewpoints until later, because I think this is a major source of confusion.
A preliminary glimpse at the stuff reality is made of. The classic split-photon experiment with half-silvered mirrors. Alternative pathways the photon can take, can cancel each other out. The mysterious measuring tool that tells us the relative squared moduli.
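The amplitude arithmetic behind that experiment can be sketched in a few lines of complex arithmetic. The beam-splitter convention below (multiply by 1/sqrt(2) on transmission and by i/sqrt(2) on reflection) is one standard textbook choice, used here only for illustration:

```python
import numpy as np

# One common convention: a half-silvered mirror multiplies the amplitude
# by 1/sqrt(2) for transmission and by i/sqrt(2) for reflection.
BS = np.array([[1, 1j],
               [1j, 1]]) / np.sqrt(2)

# Photon enters the interferometer along path A: amplitudes (path A, path B).
state = np.array([1, 0], dtype=complex)

state = BS @ state   # first half-silvered mirror: superposition of both paths
state = 1j * state   # full mirrors fold the two paths back together
state = BS @ state   # second half-silvered mirror: the paths recombine

# Squared moduli give the observed detection frequencies.
probabilities = np.abs(state) ** 2
print(probabilities)  # approximately [0. 1.]: one detector never fires
```

The two ways of reaching the first detector arrive with opposite phase and cancel exactly, while the two ways of reaching the second detector add; this is the cancellation of alternative pathways that the summary describes.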
The laws of physics are inherently over mathematical entities, configurations, that involve multiple particles. A basic, ontologically existent entity, according to our current understanding of quantum mechanics, does not look like a photon - it looks like a configuration of the universe with "A photon here, a photon there." Amplitude flows between these configurations can cancel or add; this gives us a way to detect which configurations are distinct. It is an experimentally testable fact that "Photon 1 here, photon 2 there" is the same configuration as "Photon 2 here, photon 1 there".
Since configurations are over the combined state of all the elements in a system, adding a sensor that detects whether a particle went one way or the other, becomes a new element of the system that can make configurations "distinct" instead of "identical". This confused the living daylights out of early quantum experimenters, because it meant that things behaved differently when they tried to "measure" them. But it's not only measuring instruments that do the trick - any sensitive physical element will do - and the distinctness of configurations is a physical fact, not a fact about our knowledge. There is no need to suppose that the universe cares what we think.
In retrospect, supposing that quantum physics had anything to do with consciousness was a big mistake. Could philosophers have told the physicists so? But we don't usually see philosophers sponsoring major advances in physics; why not?
You wouldn't think that it would be possible to do an experiment that told you that two particles are completely identical - not just to the limit of experimental precision, but perfectly. You could even give a precise-sounding philosophical argument for why it was not possible - but the argument would have a deeply buried assumption. Quantum physics violates this deep assumption, making the experiment easy.
How to visualize the state of a system of two 1-dimensional particles, as a single point in 2-dimensional space. Understanding configuration spaces in classical physics is a useful first step, before trying to imagine quantum configuration spaces.
Instead of a system state being associated with a single point in a classical configuration space, the instantaneous real state of a quantum system is a complex amplitude distribution over a quantum configuration space. What creates the illusion of "individual particles", like an electron caught in a trap, is a plaid distribution - one that happens to factor into the product of two parts. It is the whole distribution that evolves when a quantum system evolves. Individual configurations don't have physics; amplitude distributions have physics. Quantum entanglement is the general case; quantum independence is the special case.
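The "plaid" factorization described above can be illustrated numerically. Treating a two-particle amplitude distribution as a matrix over a discretized 2-D configuration space, an independent state is an outer product of two one-particle distributions (rank one as a matrix), while a generic entangled state does not factor. The Gaussian shapes below are arbitrary illustrative choices:

```python
import numpy as np

x = np.linspace(-2, 2, 60)
psi_a = np.exp(-x**2)            # one-particle amplitude (illustrative shape)
psi_b = np.exp(-(x - 1)**2)      # a second, linearly independent shape

# Independent ("plaid") case: the joint amplitude over the 2-D configuration
# space factors into an outer product, so as a matrix it has rank 1.
plaid = np.outer(psi_a, psi_b)
print(np.linalg.matrix_rank(plaid))      # 1

# Entangled case: a sum of two distinct plaids no longer factors.
entangled = np.outer(psi_a, psi_b) + np.outer(psi_b, psi_a)
print(np.linalg.matrix_rank(entangled))  # 2
```

Quantum independence is exactly the special, rank-one case; a generic distribution over the configuration space has no such factorization, which is why entanglement is the general case.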
Instead of thinking that a photon takes a single straight path through space, we can regard it as taking all possible paths through space, and adding the amplitudes for every possible path. Nearly all the paths cancel out - unless we do clever quantum things, so that some paths add instead of canceling out. Then we can make light do funny tricks for us, like reflecting off a mirror in such a way that the angle of incidence doesn't equal the angle of reflection. But ordinarily, nearly all the paths except an extremely narrow band cancel out - this is one of the keys to recovering the hallucination of classical physics.
One of the chief ways to confuse yourself while thinking about quantum mechanics, is to think as if photons were little billiard balls bouncing around. The appearance of little billiard balls is a special case of a deeper level on which there are only multiparticle configurations and amplitude flows. It is easy to set up physical situations in which there exists no fact of the matter as to which electron was originally which.
As a consequence of quantum theory, we can see that the concept of swapping out all the atoms in you with "different" atoms is physical nonsense. It's not something that corresponds to anything that could ever be done, even in principle, because the concept is so confused. You are still you, no matter "which" atoms you are made of.
A satirical script for a zombie movie, but not about the lurching and drooling kind. The philosophical kind.
Given that there's no such thing as "the same atom", whether you are "the same person" from one time to another can't possibly depend on whether you're made out of the same atoms.
A quantum system that factorizes can evolve into a system that doesn't factorize, destroying the illusion of independence. But entangling a quantum system with its environment, can appear to destroy entanglements that are already present. Entanglement with the environment can separate out the pieces of an amplitude distribution, preventing them from interacting with each other. Decoherence is fundamentally symmetric in time, but appears asymmetric because of the second law of thermodynamics.
Unlike classical physics, in quantum physics it is not possible to separate out a particle's "position" from its "momentum".
Unlike classical physics, in quantum physics it is not possible to separate out a particle's "position" from its "momentum". The evolution of the amplitude distribution over time, involves things like taking the second derivative in space and multiplying by i to get the first derivative in time. The end result of this time evolution rule is that blobs of particle-presence appear to race around in physical space. The notion of "an exact particular momentum" or "an exact particular position" is not something that can physically happen, it is a tool for analyzing amplitude distributions by taking them apart into a sum of simpler waves. This uses the assumption and fact of linearity: the evolution of the whole wavefunction seems to always be the additive sum of the evolution of its pieces. Using this tool, we can see that if you take apart the same distribution into a sum of positions and a sum of momenta, they cannot both be sharply concentrated at the same time. When you "observe" a particle's position, that is, decohere its positional distribution by making it interact with a sensor, you take its wave packet apart into two pieces; then the two pieces evolve differently. The Heisenberg Principle definitely does not say that knowing about the particle, or consciously seeing it, will make the universe behave differently.
The position basis can be computed locally in the configuration space; the momentum basis is not local. Why care about locality? Because it is a very deep principle; reality itself seems to favor it in some way.
Meet the Ebborians, who reproduce by fission. The Ebborian brain is like a thick sheet of paper that splits down its thickness. They frequently experience dividing into two minds, and can talk to their other selves. It seems that their unified theory of physics is almost finished, and can answer every question, when one Ebborian asks: When exactly does one Ebborian become two people?
It then turns out that the entire planet of Ebbore is splitting along a fourth-dimensional thickness, duplicating all the people within it. But why does the apparent chance of "ending up" in one of those worlds, equal the square of the fourth-dimensional thickness? Many mysterious answers are proposed to this question, and one non-mysterious one.
When a sensor measures a particle whose amplitude distribution stretches over space - perhaps seeing if the particle is to the left or right of some dividing line - then the standard laws of quantum mechanics call for the sensor+particle system to evolve into a state of (particle left, sensor measures LEFT) + (particle right, sensor measures RIGHT). But when we humans look at the sensor, it only seems to say "LEFT" or "RIGHT", never a mixture like "LIGFT". This, of course, is because we ourselves are made of particles, and subject to the standard quantum laws that imply decoherence. Under standard quantum laws, the final state is (particle left, sensor measures LEFT, human sees "LEFT") + (particle right, sensor measures RIGHT, human sees "RIGHT").
Decoherence is implicit in quantum physics, not an extra law on top of it. Asking exactly when "one world" splits into "two worlds" may be like asking when, if you keep removing grains of sand from a pile, it stops being a "heap". Even if you're inside the world, there may not be a definite answer. This puzzle does not arise only in quantum physics; the Ebborians could face it in a classical universe, or we could build sentient flat computers that split down their thickness. Is this really a physicist's problem?
There is no exact point at which decoherence suddenly happens. All of quantum mechanics is continuous and differentiable, and decoherent processes are no exception to this.
Decoherence is implicit within physics, not an extra law on top of it. You can choose representations that make decoherence harder to see, just like you can choose representations that make apples harder to see, but exactly the same physical process still goes on; the apple doesn't disappear and neither does decoherence. If you could make decoherence magically go away by choosing the right representation, we wouldn't need to shield quantum computers from the environment.
The last serious mysterious question left in quantum physics: When a quantum world splits in two, why do we seem to have a greater probability of ending up in the larger blob, exactly proportional to the integral of the squared modulus? It's an open problem, but non-mysterious answers have been proposed. Try not to go funny in the head about it.
Since quantum evolution is linear and unitary, decoherence can be seen as projecting a wavefunction onto orthogonal subspaces. This can be neatly illustrated using polarized photons and the angle of the polarized sheet that will absorb or transmit them.
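That projection picture can be sketched with ordinary real vectors; the angles below are illustrative, and a full treatment would use complex amplitudes:

```python
import numpy as np

# A photon's polarization as a unit vector; a polarizer transmits the
# component along its axis and absorbs the orthogonal component.
def transmit_probability(pol_deg, axis_deg):
    pol = np.array([np.cos(np.radians(pol_deg)), np.sin(np.radians(pol_deg))])
    axis = np.array([np.cos(np.radians(axis_deg)), np.sin(np.radians(axis_deg))])
    amplitude = pol @ axis   # projection onto the transmitting subspace
    return amplitude ** 2    # squared modulus -> chance of transmission

print(transmit_probability(0, 30))  # 0.75 = cos^2(30 degrees)
```

Measuring against a polarizer at 30 degrees to the polarization thus transmits the photon in a squared-projection fraction of worlds, about three quarters here.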
Using our newly acquired understanding of photon polarizations, we see how to construct a quantum state of two photons such that, when you measure one of them, the person in the same world as you will always find that the other photon has the opposite quantum state. This is not because any influence is transmitted; it is just decoherence that takes place in a very symmetrical way, as can readily be seen in our calculations.
(Note: This post was designed to be read as a stand-alone, if desired.) Originally, the discoverers of quantum physics thought they had discovered an incomplete description of reality - that there was some deeper physical process they were missing, and this was why they couldn't predict exactly the results of quantum experiments. The math of Bell's Theorem is surprisingly simple, and we walk through it. Bell's Theorem rules out being able to locally predict a single, unique outcome of measurements - ruling out a way that Einstein, Podolsky, and Rosen once defined "reality". This shows how deep implicit philosophical assumptions can go. If worlds can split, so that there is no single unique outcome, then Bell's Theorem is no problem. Bell's Theorem does, however, rule out the idea that quantum physics describes our partial knowledge of a deeper physical state that could locally produce single outcomes - any such description will be inconsistent.
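The flavor of that simple math can be shown in a few lines: for perfectly anticorrelated photon pairs, any local hidden-variable theory must satisfy a triangle-style inequality over three polarizer settings, while the quantum rule for the mismatch probability, sin squared of the relative angle, violates it. The angles 0, 20, and 40 degrees below are an illustrative choice:

```python
import numpy as np

# Quantum rule for entangled photon pairs: the chance the two polarizers
# give *different* results is sin^2 of the angle between them.
def p_diff(a_deg, b_deg):
    return np.sin(np.radians(a_deg - b_deg)) ** 2

# A Bell-type inequality every local hidden-variable theory must satisfy:
#   P_diff(0, 40) <= P_diff(0, 20) + P_diff(20, 40)
lhs = p_diff(0, 40)
rhs = p_diff(0, 20) + p_diff(20, 40)
print(lhs, rhs)   # ~0.413 vs ~0.234: quantum mechanics violates the bound
```

About 0.413 on the left versus about 0.234 on the right: the quantum prediction exceeds what any locally predetermined single outcome could produce, which is the result experiment confirms.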
As Einstein argued long ago, the quantum physics of his era - that is, the single-global-world interpretation of quantum physics, in which experiments have single unique random results - violates Special Relativity; it imposes a preferred space of simultaneity and requires a mysterious influence to be transmitted faster than light; which mysterious influence can never be used to transmit any useful information. Getting rid of the single global world dispels this mystery and puts everything back to normal again.
The idea that decoherence fails the test of Occam's Razor is wrong as probability theory.
(Note: Designed to be standalone readable.) An epistle to the physicists. To probability theorists, words like "simple", "falsifiable", and "testable" have exact mathematical meanings, which are there for very strong reasons. The (minority?) faction of physicists who say that many-worlds is "not falsifiable" or that it "violates Occam's Razor" or that it is "untestable", are committing the same kind of mathematical crime as non-physicists who invent their own theories of gravity that go as inverse-cube. This is one of the reasons why I, a non-physicist, dared to talk about physics - because I saw (some!) physicists using probability theory in a way that was simply wrong. Not just criticizable, but outright mathematically wrong: 2 + 2 = 3.
"Shut up and calculate" is the best approach you can take when none of your theories are very good. But that is not the same as claiming that "Shut up!" actually is a theory of physics. Saying "I don't know what these equations mean, but they seem to work" is a very different matter from saying: "These equations definitely don't mean anything, they just work!"
Early physicists simply didn't think of the possibility of more than one world - it just didn't occur to them, even though it's the straightforward result of applying the quantum laws at all levels. So they accidentally invented a completely and strictly unnecessary part of quantum theory to ensure there was only one world - a law of physics that says that parts of the wavefunction mysteriously and spontaneously disappear when decoherence prevents us from seeing them any more. If such a law really existed, it would be the only non-linear, non-unitary, non-differentiable, non-local, non-CPT-symmetric, acausal, faster-than-light phenomenon in all of physics.
If early physicists had never made the mistake, and thought immediately to apply the quantum laws at all levels to produce macroscopic decoherence, then "collapse postulates" would today seem like a completely crackpot theory. In addition to their other problems, like FTL, the collapse postulate would be the only physical law that was informally specified - often in dualistic (mentalistic) terms - because it was the only fundamental law adopted without precise evidence to nail it down. Here, we get a glimpse at that alternate Earth.
Summarizes the arguments that nail down macroscopic decoherence, aka the "many-worlds interpretation". Concludes that many-worlds wins outright given the current state of evidence. The argument should have been over, fifty years ago. New physical evidence could reopen it, but we have no particular reason to expect this.
A short story set in the same world as "Initiation Ceremony". Future physics students look back on the cautionary tale of quantum physics.
The failure of first-half-of-20th-century-physics was not due to straying from the scientific method. Science and rationality - that is, Science and Bayesianism - aren't the same thing, and sometimes they give different answers.
The reason Science doesn't always agree with the exact, Bayesian, rational answer, is that Science doesn't trust you to be rational. It wants you to go out and gather overwhelming experimental evidence.
If you have an idea, Science tells you to test it experimentally. If you spend 10 years testing the idea and the result comes out negative, Science slaps you on the back and says, "Better luck next time." If you want to spend 10 years testing a hypothesis that will actually turn out to be right, you'll have to try to do the thing that Science doesn't trust you to do: think rationally, and figure out the answer before you get clubbed over the head with it.
Science lets you believe any damn stupid idea that hasn't been refuted by experiment. Bayesianism says there is always an exactly rational degree of belief given your current evidence, and this does not shift a nanometer to the left or to the right depending on your whims. Science is a social freedom - we let people test whatever hypotheses they like, because we don't trust the village elders to decide in advance - but you shouldn't confuse that with an individual standard of rationality.
No. Maybe someday it will be part of standard scientific training, but for now, it's not, and the absence is visible.
Why am I trying to break your trust in Science? Because you can't think and trust at the same time. The social rules of Science are verbal rather than quantitative; it is possible to believe you are following them. With Bayesianism, it is never possible to do an exact calculation and get the exact rational answer that you know exists. You are visibly less than perfect, and so you will not be tempted to trust yourself.
Many of these ideas are surprisingly conventional, and being floated around by other thinkers. I'm a good deal less of a lonely iconoclast than I seem; maybe it's just the way I talk.
Is it really possible to arrive at the truth faster than Science does? Not only is it possible, but the social process of science relies on scientists doing so - when they choose which hypotheses to test. In many answer spaces it's not possible to find the true hypothesis by accident. Science leaves it up to experiment to socially declare who was right, but if there weren't some people who could get it right in the absence of overwhelming experimental proof, science would be stuck.
Albert was unusually good at finding the right theory in the presence of only a small amount of experimental evidence. Even more unusually, he admitted it - he claimed to know the theory was right, even in advance of the public proof. It's possible to arrive at the truth by thinking great high-minded thoughts of the sort that Science does not trust you to think, but it's a lot harder than arriving at the truth in the presence of overwhelming evidence.
Einstein used evidence more efficiently than other physicists, but he was still extremely inefficient in an absolute sense. If a huge team of cryptographers and physicists were examining an interstellar transmission, going over it bit by bit, we could deduce principles on the order of Galilean gravity just from seeing one or two frames of a picture. As if the very first human to see an apple fall, had, on the instant, realized that its position went as the square of the time and that this implied constant acceleration.
I looked up to the ideal of a Bayesian superintelligence, not Einstein.
Could you tell if the whole universe were shifted an inch to the left? Could you tell if the whole universe was traveling left at ten miles per hour? Could you tell if the whole universe was accelerating left at ten miles per hour? Could you tell if the whole universe was rotating?
Maybe the reason why we can't observe absolute speeds, absolute positions, absolute accelerations, or absolute rotations, is that particles don't have absolute positions - only positions relative to each other. That is, maybe quantum physics takes place in a relative configuration space.
What time is it? How do you know? The question "What time is it right now?" may make around as much sense as asking "Where is the universe?" Not only that, our physics equations may not need a t in them!
To get rid of time you must reduce it to nontime. In timeless physics, everything that exists is perfectly global or perfectly local. The laws of physics are perfectly global; the configuration space is perfectly local. Every fundamentally existent ontological entity has a unique identity and a unique value. This beauty makes ugly theories much more visibly ugly; a collapse postulate becomes a visible scar on the perfection.
Using the modern, Bayesian formulation of causality, we can define causality without talking about time - define it purely in terms of relations. The river of time never flows, but it has a direction.
There's an unfortunate tendency to talk as if Einstein had superpowers - as if, even before Einstein was famous, he had an inherent disposition to be Einstein - a potential as rare as his fame and as magical as his deeds. Yet the way you acquire superpowers is not by being born with them, but by seeing, with a sudden shock, that they are perfectly normal.
From the world of Initiation Ceremony. Brennan and the others are faced with their midterm exams.
The students are given one month to develop a theory of quantum gravity.
A response to opinions expressed by Robin Hanson, Roger Schank, and others, arguing against the notion that producing a friendly general artificial intelligence is an insurmountable problem.
A discussion of a number of disagreements Eliezer Yudkowsky has been in, with a few comments on rational disagreement.
You do have to pay attention to other people's authority a fair amount of the time. But above all, try to get the actual right answer. Clever tricks are only valuable if they help you learn what the truth actually is. If a clever argument doesn't actually work, don't use it.
How can you be the same person tomorrow as today, in the river that never flows, when not a drop of water is shared between one time and another? Having used physics to completely trash all naive theories of identity, we reassemble a conception of persons and experiences from what is left. With a surprising practical application...
Why do a series on quantum mechanics? Some of the many morals that are best illustrated by the tale of quantum mechanics and its misinterpretation.
The many worlds of quantum mechanics are not some strange, alien universe into which you have been thrust. They are where you have always lived. Egan's Law: "It all adds up to normality." Then why care about quantum physics at all? Because there's still the question of what adds up to normality, and the answer to this question turns out to be, "Quantum physics." If you're thinking of building any strange philosophies around many-worlds, you probably shouldn't - that's not what it's for.
If the laws of physics control everything we do, then how can our choices be meaningful? Because you are physics. You aren't competing with physics for control of the universe, you are within physics. Anything you control is necessarily controlled by physics.
We throw away "time" but retain causality, and with it, the concepts "control" and "decide". To talk of something as having been "always determined" is mixing up a timeless and a timeful conclusion, with paradoxical results. When you take a perspective outside time, you have to be careful not to let your old, timeful intuitions run wild in the absence of their subject matter.
(from The Quantum Physics Sequence)
Playing Devil's Advocate is occasionally helpful, but much less so than it appears. Ultimately, you should only be able to create plausible arguments for things that are actually plausible.
(just the science, for students confused by their physics textbooks)
(quantum physics does not make the universe any more mysterious than it was previously)
An index of posts explaining quantum mechanics and the many-worlds interpretation.
(the many-worlds interpretation wins outright given the current state of evidence)
An index of posts explaining quantum mechanics and the many-worlds interpretation.
A shortened index into the Quantum Physics Sequence describing only the prerequisite knowledge to understand the statement that "science can rule out a notion of personal identity that depends on your being composed of the same atoms - because modern physics has taken the concept of 'same atom' and thrown it out the window. There are no little billiard balls with individual identities. It's experimentally ruled out." The key post in this sequence is Timeless Identity, in which "Having used physics to completely trash all naive theories of identity, we reassemble a conception of persons and experiences from what is left" but this finale might make little sense without the prior discussion.
(the ontology of quantum mechanics, in which there are no particles with individual identities, rules out theories of personal continuity that invoke "the same atoms" as a concept)
Knowing that you are a deterministic system does not make you any less responsible for the consequences of your actions. You still make your decisions; you do have psychological traits, and experiences, and goals. Determinism doesn't change any of that.
Our sense of "could-ness", as in "I could have not rescued the child from the burning orphanage", comes from our own decision making algorithms labeling some end states as "reachable". If we wanted to achieve the world-state of the child being burned, there is a series of actions that would lead to that state.
There is a school of thought in philosophy that says that even if you make a decision, that still isn't enough to conclude that you have free will. You have to have been the ultimate source of your decision. Nothing else can have influenced it previously. This doesn't work. There is no such thing as "the ultimate source" of your decisions.
When confronted with a difficult question, don't try to point backwards to a misunderstood black box. Ask yourself, what's inside the black box? If the answer is another black box, you likely have a problem.
An illustration of a few ways that trying to perform reductionism can go wrong.
There is a way of thinking about programming a computer that conforms well to human intuitions: telling the computer what to do. The problem is that the computer isn't going to understand you, unless you program the computer to understand. If you are programming an AI, you are not giving instructions to a ghost in the machine; you are creating the ghost.
A comparison of LA-602, the classified report investigating the possibility of a nuclear bomb igniting the atmosphere and killing everyone, and the RHIC Review, the document explaining why the Relativistic Heavy Ion Collider was not going to destroy the world. There is a key difference between these documents: one of them is a genuine discussion of the risks, taking them seriously, and the other is a work of public relations. Work on existential risk needs to be more like the former.
A description of the last several months of sequence posts, that identifies the topic that Eliezer actually wants to explain: morality.
A dialogue on the proper application of the inside and outside views.
Just because two things share surface similarities doesn't mean that they work the same way, or can be expected to be similar in other respects. If you want to understand what one thing does, studying a different thing typically doesn't help. That type of reasoning only works if the two things are especially similar, on a deep level.
An introduction to optimization processes and why Yudkowsky thinks that a singularity would be far more powerful than calculations based on human progress would suggest.
Because humans are a sexually reproducing species, human brains are nearly identical. All human beings share similar emotions, tell stories, and employ identical facial expressions. We naively expect all other minds to work like ours, which causes problems when we try to predict the actions of non-human intelligences.
When people talk about "AI", they're talking about an incredibly wide range of possibilities. Having a word like "AI" is like having a word for everything which isn't a duck.
Because minds are physical processes, it is theoretically possible to specify a mind which draws any conclusion in response to any argument. There is no argument that will convince every possible mind.
It is possible to talk about "sexiness" as a property of an observer-subject pair. It is equally possible to talk about "sexiness" as a property of the subject alone, as long as each observer is allowed a different procedure for determining how sexy someone is. Doing neither of these will get you into trouble.
A few thoughts from Eliezer Yudkowsky about a discussion of sexism on Overcoming Bias.
If your own theory of morality was disproved, and you were persuaded that there was no morality, that everything was permissible and nothing was forbidden, what would you do? Would you still tip cabdrivers?
If there were some great stone tablet upon which Morality was written, and you read it, and it was something horrible, that would be a rather unpleasant scenario. What would you want that tablet to say, if you could choose it? What would be the best case scenario?
Why don't you just do that, and ignore the tablet completely?
There is no computer program so persuasive that you can run it on a rock. A mind, in order to be a mind, needs some sort of dynamic rules of inference or action. A mind has to be created already in motion.
What does "fairness" actually refer to? Why is it "fair" to divide a pie into three equal pieces for three different people?
Key questions for two different moral intuitions: morality-as-preference, and morality-as-given.
A dialogue on the idea that morality is a subset of our desires.
A dialogue on the idea that morality is an absolute external truth.
Eliezer mentions four interpretations of Arthur Schopenhauer's saying, "A man can do as he wills, but not will as he wills."
Ultimately, when you reflect on how your mind operates, and consider questions like "why does Occam's razor work?" and "why do I expect the future to be like the past?", you have no option but to use your own mind. There is no way to jump to an ideal state of pure emptiness and evaluate these claims without using your existing mind.
A discussion of an interesting kind of lie: one the listener knows is a lie, but without knowing that the liar knows they know it's a lie.
A few key differences between Eliezer Yudkowsky's ideas on reflection and the ideas of other philosophers.
The genetic fallacy seems like a strange kind of fallacy. The problem is that the original justification for a belief does not always equal the sum of all the evidence we currently have available. On the other hand, it is very easy for people to go on believing untruths that came from a source they have since rejected.
There are some things that are so fundamental, that you really can't doubt them effectively. Be careful you don't use this as an excuse, but ultimately, you really can't start out by saying that you won't trust anything that is the output of a neuron.
When we rebel against our own nature, we act in accordance with our own nature. There isn't any other way it could be.
Probabilities exist only in minds. The probability you calculate for winning the lottery depends on your prior, which depends on which mind you have. However, this calculation does not refer to your mind. Thus, your calculated probability is subjectively objective. You conclude that someone who assigns a different probability (given the same information) is objectively wrong: You expect that they will lose on average.
A review of Lawrence Watt-Evans's fiction.
Does moral progress actually happen? And if so, how?
How did love ever come into the universe? How did that happen, and how special was it, really?
You do know quite a bit about morality. It's not perfect information, surely, or absolutely reliable, but you have someplace to start. If you didn't, you'd have a much harder time thinking about morality than you do.
As a general rule, if you find yourself suffering from existential angst, check and see if you're not just feeling unhappy because of something else going on in your life. An awful lot of existential angst comes from people trying to solve the wrong problem.
Seeing history in person evokes a very strong feeling.
There is a chance, however remote, that novel physics experiments could destroy the earth. Is banning physics experiments a good idea?
Our society has a moral norm for applauding "truth", but actual truths get much less applause (this is a bad thing).
When you don't have a numerical procedure to generate probabilities, you're probably better off using your own evolved abilities to reason in the presence of uncertainty.
How can we explain counterfactuals having a truth value, if we don't talk about "nearby possible worlds" or any of the other explanations offered by philosophers?
It really does seem like "2+3=5" is true. Things get confusing if you ask what you mean when you say "2+3=5 is true". But because the simple rules of addition function so well to predict observations, it really does seem like it really must be true.
If, for whatever reason, evolution or education had convinced you to believe that it was moral to do something that you now believe is immoral, you would go around saying "This is moral to do no matter what anyone else thinks of it." How much does this matter?
Discusses the various lines of retreat that have been set up in the discussion on metaethics.
What exactly does a correct theory of metaethics need to look like?
Eliezer's long-awaited theory of meta-ethics.
A few clarifications on how Yudkowsky's theory of metaethics applies to interpersonal interactions.
It's really hard to imagine aliens that are fundamentally different from human beings.
There is a lot of machinery hidden beneath the words, and rationalist's taboo is one way to make a step towards exposing it.
The behaviorists thought that speaking about anything like a mind, or emotions, or thoughts, was unscientific. After all, they said, you can't observe anger. You can just observe behavior. But, it is possible, using empathy, to correctly predict wide varieties of behavior, which you can't account for by Pavlovian conditioning.
Logical positivism was based around the idea that the only meaningful statements were those that could be verified by experiment. Unfortunately for positivism, there are meaningful statements that are very likely true or very likely false, and yet cannot be tested.
Don't bother coming up with clever, persuasive arguments for why evolution will do things the way you prefer. It really isn't listening.
Avoid situations, as much as you possibly can, in which optimistic thinking suggests ideas for conscious consideration. In real life problems, if you've done that, you've probably already screwed up.
A clarification about Yudkowsky's metaethics.
Don't go looking for some pure essence of goodness, distinct from, you know, actual good.
A parable about an imaginary society that has arbitrary, alien values.
How can you make errors about morality?
A bit of explanation on the idea of morality as "computation".
When we say that something is arbitrary, we are saying that it feels like it should come with a justification, but doesn't.
When we say that a fair division of pie among N people is for each person to get 1/N of the pie, we aren't being arbitrary. We're being fair.
Humans are built in such a way as to do what is right. Other optimization processes may not. So what?
"Disagreement" between rabbits and foxes is sheer anthropomorphism. Rocks and hot air don't disagree, even though one decreases in elevation and one increases in elevation.
Anthropomorphism didn't become obviously wrong until we realized that the tangled neurons inside the brain were performing complex information processing, and that this complexity arose as a result of evolution.
An explanation, using cartoons, of Löb's theorem.
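For reference, the theorem illustrated in the cartoon guide can be stated compactly (a standard formulation, where □P means "P is provable in theory T"):

```latex
% Löb's theorem: if T proves that provability of P implies P,
% then T proves P outright.
T \vdash (\Box P \rightarrow P) \quad\Longrightarrow\quad T \vdash P
```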
Löb's theorem provides, by analogy, a nice explanation of why you really can't trust yourself. Don't trust thoughts because you think them; trust them because they were generated by trustworthy rules.
Good things aren't good because humans care about what's good. Good things are good because they save lives, make people happy, give us control over our own lives, involve us with others and prevent us from collapsing into total self-absorption, keep life complex and non-repeating and aesthetic and interesting, etc.
A particular system of values is analyzed, and is used to demonstrate the idea that anytime you consider changing your morals, you do so using your own current meta-morals. Forget this at your peril.
CEV is not the essence of goodness. If functioning properly, it is supposed to work analogously to a mirror -- a mirror is not inherently apple-shaped, but in the presence of an apple, it reflects the image of an apple. In the presence of the Pebblesorters, an AI running CEV would begin transforming the universe into heaps containing prime numbers of pebbles. In the presence of humankind, an AI running CEV would begin doing whatever is right for it to do.
There are some mental categories we draw that are relatively simple and straightforward. Others get trickier, because they are primarily drawn in such a way that whether or not something fits into the category is important information to our utility function. Deciding whether someone is "alive", for instance. Is someone like Terri Schiavo alive? This is part of why technology creates new moral dilemmas, and why teaching morality to a computer is so hard.
We underestimate the complexity of our own unnatural categories. This doesn't work when you're trying to build a FAI.
Theories of teleology have a few problems. First, they often wind up drawing causal arrows from the future to the past. Second, they lead you to make predictions based on anthropomorphism. Finally, they open you up to the Mind Projection Fallacy: assuming that the purpose of something is an inherent property of that thing, rather than a property of the agent or process that produced it.
It can feel as though you understand how to build an AI, when really, you're still making all your predictions based on empathy. Your AI design will not work until you figure out a way to reduce the mental to the non-mental.
Unfortunately, very little of philosophy is actually helpful in AI research, for a few reasons.
If a choice is hard, that means the alternatives are around equally balanced, right?
Qualitative strategies to achieve friendliness tend to run into difficulty.
Why programming an AI that only answers questions is not a trivial problem, for many of the same reasons that programming an FAI isn't trivial.
The standard visualization for the Prisoner's Dilemma doesn't really work on humans. We can't pretend we're completely selfish.
According to classic game theory, if you know how many iterations there are going to be in the iterated prisoner's dilemma, then you shouldn't use tit for tat. Does this really seem right?
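The backward-induction argument above can be contrasted with a simulation. A minimal sketch (the payoff values and strategy names are my own illustrative choices, not from the post): two tit-for-tat players outscore two players who follow the "rational" always-defect equilibrium.

```python
# Toy iterated prisoner's dilemma with a known number of rounds
# (illustrative payoffs, not from the original post).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):       # cooperate first, then mirror
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):     # the backward-induction play
    return "D"

def play(a, b, rounds=10):
    ha, hb, sa, sb = [], [], 0, 0
    for _ in range(rounds):
        ma, mb = a(hb), b(ha)            # each sees the other's past moves
        pa, pb = PAYOFF[(ma, mb)]
        ha.append(ma); hb.append(mb)
        sa += pa; sb += pb
    return sa, sb

print(play(tit_for_tat, tit_for_tat))        # (30, 30)
print(play(always_defect, always_defect))    # (10, 10)
```

Mutual defection is the unique equilibrium under backward induction, yet the defectors end up with a third of the cooperators' score, which is the tension the post points at.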
Hollywood tends to model "emotionless" AIs as humans with some slight differences. For the most part, they act like emotionally repressed humans, even though this is a very unlikely way for AIs to behave.
Don't rule out supernatural explanations because they're supernatural. Test them the way you would test any other hypothesis. And probably, you will find out that they aren't true.
Some of the previous post was incorrect. Psychic powers, if indeed they were ever discovered, would actually be strong evidence in favor of non-reductionism.
A discussion of the concept of optimization.
When Eliezer went into his death spiral around intelligence, he wound up making a lot of mistakes that later became very useful.
When Eliezer was quite young, it took him a very long time to get to the point where he was capable of considering that the dangers of technology might outweigh the benefits.
Eliezer's skills at defeating other people's ideas led him to believe that his own (mistaken) ideas must have been correct.
Eliezer's big mistake was when he took a mysterious view of morality.
If you're uncertain about something, communicate that uncertainty. Do so as clearly as you can. You don't help yourself by hiding how confused you are.
If the LHC, or some sort of similar project, continually seemed to fail right before it did something we thought might destroy the world, this is something we should notice.
An illustration of inconsistent probability assignments.
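One classic way to exhibit such an inconsistency is a Dutch book. This toy sketch is my own example, not from the post: an agent whose probabilities for A and not-A sum to more than 1 will, by its own lights, pay more for the two bets than either can possibly return.

```python
# Inconsistent probabilities: P(A) + P(not A) should equal 1.0.
p_a, p_not_a = 0.6, 0.6

stake = 1.0                                # each bet pays `stake` if it wins
cost = p_a * stake + p_not_a * stake       # "fair" price by the agent's own lights

# Exactly one of A / not-A occurs, so the agent wins exactly one bet.
payout = stake
guaranteed_loss = cost - payout
print(round(guaranteed_loss, 10))          # 0.2: a sure loss, whatever happens
```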
Eliezer started to dig himself out of his philosophical hole when he noticed a tiny inconsistency.
When Eliezer started to consider the possibility of Friendly AI as a contingency plan, he permitted himself a line of retreat. He was now able to slowly start to reconsider positions in his metaethics, and move gradually towards better ideas.
Eliezer actually looked back and realized his mistakes when he imagined the idea of an optimization process.
There are people who have acquired more mastery over various fields than Eliezer has over his.
People in higher levels of business, science, etc, often really are there because they're significantly more competent than everyone else.
A lot of AI researchers aren't really all that exceptional. This is a problem, though most people don't seem to see it.
Eliezer considers his training as a rationalist to have started the day he realized just how awfully he had screwed up.
As a human, if you merely try to try something, you will put much less work into it than if you actually try to do it.
A fictional exchange between Mark Hamill and George Lucas over the scene in Empire Strikes Back where Luke Skywalker attempts to lift his X-wing with the force.
Compare the world in which there is a God, who will intervene at some threshold, against a world in which everything happens as a result of physical laws. Which universe looks more like our own?
The story of how Eliezer Yudkowsky became a Bayesian.
A lot of projects seem impossible, meaning that we don't immediately see a way to do them. But after working on them for a long time, they start to look merely extremely difficult.
It takes an extraordinary amount of rationality before you stop making stupid mistakes. Doing better requires making extraordinary efforts.
The ultimate level of attacking a problem is the point at which you simply shut up and solve the impossible problem.
A depiction of a crisis of faith in the Beisutsukai world.
Jeffreyssai carefully undergoes a crisis of faith.
There are simple evolutionary reasons why power corrupts humans. They can be beaten, though.
Before you start talking about a system of values, try to actually understand the values of that system as believed by its practitioners.
If you want to tell a truly convincing lie to someone who knows what they're talking about, you have to lie either about lots of specific object-level facts, or about more general laws, or about the laws of thought. Many of the memes out there about how we learn things originally came from people trying to convince others to believe false statements.
Ethics can protect you from your own mistakes, especially when your mistakes are about really fundamental things.
Humans may have a sense of ethical inhibition because various ancestors, who didn't follow ethical norms when they thought they could get away with it, nevertheless got caught.
Are ethical rules simply actions that have a high cost associated with them? Or are they bindings, expected to hold in all situations, no matter the cost otherwise?
Some responses to comments about the idea of Ethical Injunctions.
Everything you are is inside your brain. But not everything inside your brain is you. You can draw mental separation lines, and doing so can make you more reflective.
The unpredictability of intelligence is a very special and unusual kind of surprise, which is not at all like noise or randomness. There is a weird balance between the unpredictability of actions and the predictability of outcomes.
What does a belief that an agent is intelligent look like? What predictions does it make?
When you make plans, you are trying to steer the future into regions higher in your preference ordering.
To speak of intelligence, rather than optimization power, we need to divide optimization power by the resources needed, or the amount of prior optimization that had to be done on the system.
Could economics help provide a definition and a general measure of intelligence?
There are a few connections between economics and intelligence, so economics might have something to contribute to a definition of intelligence.
A list of abilities that would be amazing if they were magic, or if only a few people had them.
It is possible for humans to create something better than ourselves. It's been done. It's not paradoxical.
When someone asks you why you're doing "X", don't ask yourself why you're doing "X". Ask yourself whether someone should do "X".
Suppose we landed on another planet and found a large metal object that contained wires made of superconductors, and hundreds of tightly matched gears. Would we be able to infer the presence of an optimization process?
Creativity seems to consist of breaking rules, and violating expectations. But there is one rule that cannot be broken: creative solutions must have something good about them. Creativity is a surprise, but most surprises aren't creative.
Facing a random scenario, the correct solution is really not to behave randomly. Faced with an irrational universe, throwing away your rationality won't help.
If a system does better when randomness is added into its processing, then it must somehow have been performing worse than random. And if you can recognize that this is the case, you ought to be able to generate a non-randomized system.
An illustration of a case in Artificial Intelligence in which a randomized algorithm is purported to work better than a non-randomized algorithm, and a discussion of why this is the case.
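A toy sketch of the point in the two summaries above (my own example, not from the post): a predictor hard-wired to one guess can be exploited below chance; coin-flipping caps the exploitation at roughly 50%; but a deterministic rule that actually uses the pattern beats both.

```python
import random

# Pessimal environment for a predictor hard-wired to guess 0:
# a constant stream of 1s. (Toy example, not from the original post.)
stream = [1] * 1000

def accuracy(predict):
    history, correct = [], 0
    for bit in stream:
        if predict(history) == bit:
            correct += 1
        history.append(bit)
    return correct / len(stream)

rng = random.Random(0)
always_zero = lambda h: 0                  # exploited: 0% accuracy
coin_flip = lambda h: rng.randrange(2)     # unexploitable, but only ~50%
copy_last = lambda h: h[-1] if h else 0    # sees the pattern: ~100%

print(accuracy(always_zero), accuracy(coin_flip), accuracy(copy_last))
```

The fixed rule does worse than random, which is exactly the signal that a smarter deterministic rule (here, copying the last bit) must exist.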
In most cases, if you say that something isn't working, then you have to specify a new thing that you think could work. You can't just say that you have to not do what you have been doing. If you observe that selling apples isn't working out for you financially, you can't just decide to sell nonapples.
What logic actually does is preserve truth in a model. It says that if all of the premises are true, then this conclusion is indeed true. But that's not all that minds do. There's an awful lot else that you need, before you start actually getting anything like intelligence.
The difference between Logical and Connectionist AIs is portrayed as a grand dichotomy between two different sides of the force. The truth is that they're just two different designs out of many possible ones.
It's very tempting to reason that your invention X will do Y, because it is similar to thing Z, which also does Y. But reality very often ignores this justification for why your new invention will work.
Making analogies to things that have positive or negative connotations is an even better way to make sure you fail.
In cases where the causal factors creating a circumstance are changing, the outside view may be misleading. The best you can do may be to take the inside view, while not trying to assign overly precise predictions.
The first replicator was the original black swan. A couple of molecules that, despite not having a particularly good optimization process, could explore new regions of pattern-space. This is an event that would have implications that would have seemed absurd to predict.
Figuring out how to place concepts in categories is an important part of the problem. Before we classify AI into the same group as human intelligence, farming, and industry, we need to think about why we want to put them into that same category.
Trying to derive predictions from a theory that says that sexual reproduction increases the rate of evolution is more difficult than it first appears.
A discussion of some of the classical big steps in the evolution of life, and how they relate to the idea of optimization.
If you hadn't ever seen brains before, but had only seen evolution, you might start making astounding predictions about their ability. You might, for instance, think that creatures with brains might someday be able to create complex machinery in only a millennium.
Cascades, cycles, and insight are three ways in which the development of intelligence appears discontinuous. Cascades are when one development makes more developments possible. Cycles are when completing a process causes that process to be completed more. And insight is when we acquire a chunk of information that makes solving a lot of other problems easier.
If you have a system that gets better at making itself get better, it will appear to discontinuously advance. Add in the ability of intelligences to accomplish tasks which previous intelligences labeled impossible, and you have the potential for dramatic advancement.
The development of the mouse did lead to a productivity increase. But it didn't lead to a major productivity increase at creating future productivity increases. Therefore, the recursive process didn't take off properly.
If you get a small advantage in nanotechnology, that might not be enough to take over the world. But if you use that small advantage in nanotechnology to gain a major advancement in bots, you could gain an extraordinary amount of power very fast.
It is possible to create a singleton that won't do nasty things. This may be preferable to a scenario in which many agents start competing for resources without any way of securing themselves other than constant defense and deterrence.
A list of Ray Kurzweil's predictions for the period 1999-2009.
When you take a process that is capable of making significant progress developing other processes, and turn it on itself, you should either see it flatline, or FOOM. The likelihood of it doing anything that looks like human-scale progress is unbelievably low.
It seems likely that there will be a discontinuity in the process of AI self-improvement around the time when AIs become capable of doing AI theory. A lot of things have to go exactly right in order to get a slow takeoff, and there is no particular reason to expect them all to happen that way.
Yudkowsky's attempt to summarize Hanson's positions, list the possible futures discussed so far, and identify which ones seem most and least likely to Yudkowsky.
The problem with selecting abstractions is that for your data, there are probably lots of abstractions that fit the data equally well. In that case, we need some other way to decide which abstractions are useful.
Sustained strong recursion has a much larger effect on growth than other possible mechanisms for growth.
People's stated reason for a rejection may not be the same as the actual reason for that rejection.
Attempting to create an intelligence without actually understanding what intelligence is, is a common failure mode. If you want to make actual progress, you need to truly understand what it is that you are trying to make.
Yudkowsky's guesses about what the key sticking points in the AI FOOM debate are.
A few tricks Yudkowsky uses to think about the future.
Reasons why aspiring rationalists might still disagree after trading arguments.
Yudkowsky's attempt to summarize what he thinks on the subject of Friendly AI, without providing any of the justifications for what he believes.
Yudkowsky's addition to Hanson's endorsement of cryonics.
Given that we live in a big universe, where we can't actually determine whether a particular person exists (they will exist anyway in some other Hubble volume or Everett branch), it makes more sense to care about whether the people we can influence are having happy lives than about whether certain people exist in our own local area.
It's rather difficult to imagine a way in which you could create an AI, and not somehow either take over or destroy the world. How can you use unlimited power in such a way that you don't become a malevolent deity, in the Epicurean sense?
Trying to imagine a Eutopia is actually difficult. But it is worth trying.
Fun Theory is an attempt to actually answer questions about eternal boredom that are more often posed and left hanging. Attempts to visualize Utopia are often defeated by standard biases, such as the attempt to imagine a single moment of good news ("You don't have to work anymore!") rather than a typical moment of daily life ten years later. People also believe they should enjoy various activities that they actually don't. But since human values have no supernatural source, it is quite reasonable for us to try to understand what we want. There is no external authority telling us that the future of humanity should not be fun.
Life should not always be made easier for the same reason that video games should not always be made easier. Think in terms of eliminating low-quality work to make way for high-quality work, rather than eliminating all challenge. One needs games that are fun to play and not just fun to win. Life's utility function is over 4D trajectories, not just 3D outcomes. Values can legitimately be over the subjective experience, the objective result, and the challenging process by which it is achieved - the traveller, the destination and the journey.
Are we likely to run out of new challenges, and be reduced to playing the same video game over and over? How large is Fun Space? This depends on how fast you learn; the faster you generalize, the more challenges you see as similar to each other. Learning is fun, but uses up fun; you can't have the same stroke of genius twice. But the more intelligent you are, the more potential insights you can understand; human Fun Space is larger than chimpanzee Fun Space, and not just by a linear factor of our brain size. In a well-lived life, you may need to increase in intelligence fast enough to integrate your accumulating experiences. If so, the rate at which new Fun becomes available to intelligence, is likely to overwhelmingly swamp the amount of time you could spend at that fixed level of intelligence. The Busy Beaver sequence is an infinite series of deep insights not reducible to each other or to any more general insight.
Much of the anomie and disconnect in modern society can be attributed to our spending all day on tasks (like office work) that we didn't evolve to perform (unlike hunting and gathering on the savanna). Thus, many of the tasks we perform all day do not engage our senses - even the most realistic modern video game is not the same level of sensual experience as outrunning a real tiger on the real savanna. Even the best modern video game is low-bandwidth fun - a low-bandwidth connection to a relatively simple challenge, which doesn't fill our brains well as a result. But future entities could have different senses and higher-bandwidth connections to more complicated challenges, even if those challenges didn't exist on the savanna.
Our hunter-gatherer ancestors strung their own bows, wove their own baskets and whittled their own flutes. Part of our alienation from our design environment is the number of tools we use that we don't understand and couldn't make for ourselves. It's much less fun to read something in a book than to discover it for yourself. Specialization is critical to our current civilization. But the future does not have to be a continuation of this trend in which we rely more and more on things outside ourselves which become less and less comprehensible. With a surplus of power, you could begin to rethink the life experience as a road to internalizing new strengths, not just staying alive efficiently through extreme specialization.
People who are not members of a minority group may somehow come to believe that members of this group possess certain traits which seem to "fit". These traits are not required to have any connection to the real traits of that group.
Offering people more choices that differ along many dimensions, may diminish their satisfaction with their final choice. Losses are more painful than the corresponding gains are pleasurable, so people think of the dimensions along which their final choice was inferior, and of all the other opportunities passed up. If you can only choose one dessert, you're likely to be happier choosing from a menu of two than from a menu of fourteen. Refusing tempting choices consumes mental energy and decreases performance on other cognitive tasks. A video game that contained an always-visible easier route through, would probably be less fun to play even if that easier route were deliberately foregone. You can imagine a Devil who follows someone around, making their life miserable, solely by offering them options which are never actually taken. And what if a worse option is taken due to a predictable mistake? There are many ways to harm people by offering them more choices.
It is dangerous to live in an environment in which a single failure of resolve, throughout your entire life, can result in a permanent addiction or in a poor edit of your own brain. For example, a civilization which is constantly offering people tempting ways to shoot off their own feet - for example, offering them a cheap escape into eternal virtual reality, or customized drugs. It requires a constant stern will that may not be much fun. And it's questionable whether a superintelligence that descends from above to offer people huge dangerous temptations that they wouldn't encounter on their own, is helping.
An AI, trying to develop highly accurate models of the people it interacts with, may develop models which are conscious themselves. For ethical reasons, it would be preferable if the AI wasn't creating and destroying people in the course of interpersonal interactions. Resolving this issue requires making some progress on the hard problem of conscious experience. We need some rule which definitely identifies all conscious minds as conscious. We can make do if it still identifies some nonconscious minds as conscious.
Discusses some of the problems of, and justification for, creating AIs that are knowably not conscious / sentient / people / citizens / subjective experiencers. We don't want the AI's models of people to be people - we don't want conscious minds trapped helplessly inside it. So we need to know how to tell that something is definitely not a person; and in this case, maybe we would like the AI itself to not be a person, which would simplify a lot of ethical issues if we could pull it off. Creating a new intelligent species is not lightly to be undertaken from a purely ethical perspective; if you create a new kind of person, you have to make sure it leads a life worth living.
Eliezer informs readers that he had accidentally published the previous post, "Nonsentient Optimizers", when it was only halfway done.
As a piece of meta advice for how to act when you have more power than you probably should, avoid doing things that cannot be undone. Creating a new sentient being is one of those things to avoid. If you need to rewrite the source code of a nonsentient optimization process, this is less morally problematic than rewriting the source code of a sentient intelligence who doesn't want to be rewritten. Creating new life forms creates such massive issues that it's really better to just not try, at least until we know a lot more.
C. S. Lewis's Narnia has a problem, and that problem is the super-lion Aslan - who demotes the four human children from the status of main characters, to mere hangers-on while Aslan does all the work. Iain Banks's Culture novels have a similar problem; the humans are mere hangers-on of the superintelligent Minds. We already have strong ethical reasons to prefer to create nonsentient AIs rather than sentient AIs, at least at first. But we may also prefer in just a fun-theoretic sense that we not be overshadowed by hugely more powerful entities occupying a level playing field with us. Entities with human emotional makeups should not be competing on a level playing field with superintelligences - either keep the superintelligences off the playing field, or design the smaller (human-level) minds with a different emotional makeup that doesn't mind being overshadowed.
Robin Dunbar's original calculation showed that the maximum human group size was around 150. But a typical size for a hunter-gatherer band would be 30-50, cohesive online groups peak at 50-60, and small task forces may peak in internal cohesiveness around 7. Our attempt to live in a world of six billion people has many emotional costs: We aren't likely to know our President or Prime Minister, or to have any significant influence over our country's politics, although we go on behaving as if we did. We are constantly bombarded with news about improbably pretty and wealthy individuals. We aren't likely to find a significant profession where we can be the best in our field. But if intelligence keeps increasing, the number of personal relationships we can track will also increase, along with the natural degree of specialization. Eventually there might be a single community of sentients that really was a single community.
Try spending a day doing as many new things as possible.
It may be better to create a world that operates by better rules, that you can understand, so that you can optimize your own future, than to create a world that includes some sort of deity that can be prayed to. The human reluctance to have their future controlled by an outside source is a nontrivial part of morality.
Fun Theory is important for replying to critics of human progress; for inspiring people to keep working on human progress; for refuting religious theodicies which claim that this world could possibly have been benevolently designed; for showing that religious Heavens show the signature of the same human biases that torpedo other attempts at Utopia; and for appreciating the great complexity of our values and of a life worth living, which requires a correspondingly strong effort of AI design to create AIs that can play good roles in a good future.
Each part of the human brain is optimized for behaving correctly, assuming that the rest of the brain is operating exactly as expected. Change one part, and the rest of your brain may not work as well. Increasing a human's intelligence is not a trivial problem.
Creating new emotions seems like a desirable aspect of many parts of Fun Theory, but this is not to be trivially postulated. It's the sort of thing best done with superintelligent help, and slowly and conservatively even then. We can illustrate these difficulties by trying to translate the short English phrase "change sex" into a cognitive transformation of extraordinary complexity and many hidden subproblems.
Since the events in video games have no actual long-term consequences, playing a video game is not likely to be nearly as emotionally involving as much less dramatic events in real life. The supposed Utopia of playing lots of cool video games forever is life as a series of disconnected episodes with no lasting consequences. Our current emotions are bound to activities that were subgoals of reproduction in the ancestral environment - but we now pursue these activities as independent goals regardless of whether they lead to reproduction.
Stories and lives are optimized according to rather different criteria. Advice on how to write fiction will tell you that "stories are about people's pain" and "every scene must end in disaster". I once assumed that it was not possible to write any story about a successful Singularity because the inhabitants would not be in any pain; but something about the final conclusion that the post-Singularity world would contain no stories worth telling seemed alarming. Stories in which nothing ever goes wrong are painful to read; would a life of endless success have the same painful quality? If so, should we simply eliminate that revulsion via neural rewiring? Pleasure probably does retain its meaning in the absence of pain to contrast it; they are different neural systems. The present world has an imbalance between pain and pleasure; it is much easier to produce severe pain than correspondingly intense pleasure. One path would be to address the imbalance and create a world with more pleasures, and free of the more grindingly destructive and pointless sorts of pain. Another approach would be to eliminate pain entirely. I feel like I prefer the former approach, but I don't know if it can last in the long run.
Humans seem to be on a hedonic treadmill; over time, we adjust to any improvements in our environment - after a month, the new sports car no longer seems quite as wonderful. This aspect of our evolved psychology is not surprising: it is a rare organism in a rare environment whose optimal reproductive strategy is to rest with a smile on its face, feeling happy with what it already has. To entirely delete the hedonic treadmill seems perilously close to tampering with Boredom itself. Is there enough fun in the universe for a transhuman to jog off the treadmill - improve their life continuously, leaping to ever-higher hedonic levels before adjusting to the previous one? Can ever-higher levels of pleasure be created by the simple increase of ever-larger floating-point numbers in a digital pleasure center, or would that fail to have the full subjective quality of happiness? If we continue to bind our pleasures to novel challenges, can we find higher levels of pleasure fast enough, without cheating? The rate at which value can increase as more bits are added, and the rate at which value must increase for eudaimonia, together determine the lifespan of a mind. If minds must use exponentially more resources over time in order to lead a eudaimonic existence, their subjective lifespan is measured in mere millennia even if they can draw on galaxy-sized resources.
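The lifespan bound at the end of this summary can be illustrated with a toy calculation (all numbers below are illustrative assumptions, not figures from the post):

```python
import math

# Hedged sketch: suppose a eudaimonic mind must double its resources
# every subjective century, and a galaxy offers roughly 10^20 times its
# starting resource budget (e.g. ~10^11 stars x ~10^9 baseline budgets
# per star - both figures are assumptions for illustration).
resource_ratio = 1e20          # galaxy-sized budget / initial budget
doubling_period_years = 100    # one doubling per subjective century

doublings = math.log2(resource_ratio)            # doublings the budget allows
lifespan_years = doublings * doubling_period_years

print(f"{doublings:.0f} doublings -> ~{lifespan_years:,.0f} subjective years")
# even a galaxy-sized budget buys only a few millennia under exponential demand
```

Because the budget enters only through its logarithm, multiplying the available resources by another factor of a million adds just twenty more doublings - which is why exponential demand makes even astronomical resources feel small.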
If a citizen of the Past were dropped into the Present world, they would be pleasantly surprised along at least some dimensions; they would also be horrified, disgusted, and frightened. This is not because our world has gone wrong, but because it has gone right. A true Future gone right would, realistically, be shocking to us along at least some dimensions. This may help explain why most literary Utopias fail; as George Orwell observed, "they are chiefly concerned with avoiding fuss". Heavens are meant to sound like good news; political utopias are meant to show how neatly their underlying ideas work. Utopia is reassuring, unsurprising, and dull. Eutopia would be scary. (Of course the vast majority of scary things are not Eutopian, just entropic.) Try to imagine a genuinely better world in which you would be out of place - not a world that would make you smugly satisfied at how well all your current ideas had worked. This proved to be a very important exercise when I tried it; it made me realize that all my old proposals had been optimized to sound safe and reassuring.
Utopia and Dystopia both confirm the moral sensibilities you started with; whether the world is a libertarian utopia of government non-interference, or a hellish dystopia of government intrusion and regulation, either way you get to say "Guess I was right all along." To break out of this mold, write down the Utopia, and the Dystopia, and then try to write down the Weirdtopia - an arguably-better world that zogs instead of zigging or zagging. (Judging from the comments, this exercise seems to have mostly failed.)
A pleasant surprise probably has a greater hedonic impact than being told about the same positive event long in advance - hearing about the positive event is good news in the moment of first hearing, but you don't have the gift actually in hand. Then you have to wait, perhaps for a long time, possibly comparing the expected pleasure of the future to the lesser pleasure of the present. This argues that if you have a choice between a world in which the same pleasant events occur, but in the first world you are told about them long in advance, and in the second world they are kept secret until they occur, you would prefer to live in the second world. The importance of hope is widely appreciated - people who do not expect their lives to improve in the future are less likely to be happy in the present - but the importance of vague hope may be understated.
Vagueness usually has a bad name in rationality, but the Future is something about which, in fact, we do not possess strong reliable specific information. Vague (but justified!) hopes may also be hedonically better. But a more important caution for today's world is that highly specific pleasant scenarios can exert a dangerous power over human minds - suck out our emotional energy, make us forget what we don't know, and cause our mere actual lives to pale by comparison. (This post is not about Fun Theory proper, but it contains an important warning about how not to use Fun Theory.)
How should rationalists use their near and far modes of thinking? And how should knowing about near versus far modes influence how we present the things we believe to other people?
"Boredom" is an immensely subtle and important aspect of human values, nowhere near as straightforward as it sounds to a human. We don't want to get bored with breathing or with thinking. We do want to get bored with playing the same level of the same video game over and over. We don't want changing the shade of the pixels in the game to make it stop counting as "the same game". We want a steady stream of novelty, rather than spending most of our time playing the best video game level so far discovered (over and over) and occasionally trying out a different video game level as a new candidate for "best". These considerations would not arise in most utility functions in expected utility maximizers.
Mirror neurons are neurons that fire both when performing an action oneself, and watching someone else perform the same action - for example, a neuron that fires when you raise your hand or watch someone else raise theirs. We predictively model other minds by putting ourselves in their shoes, which is empathy. But some of our desire to help relatives and friends, or be concerned with the feelings of allies, is expressed as sympathy, feeling what (we believe) they feel. Like "boredom", the human form of sympathy would not be expected to arise in an arbitrary expected-utility-maximizing AI. Most such agents would regard any agents in its environment as a special case of complex systems to be modeled or optimized; it would not feel what they feel.
Our sympathy with other minds makes our interpersonal relationships one of the most complex aspects of human existence. Romance, in particular, is more complicated than being nice to friends and kin, negotiating with allies, or outsmarting enemies - it contains aspects of all three. Replacing human romance with anything simpler or easier would decrease the peak complexity of the human species - a major step in the wrong direction, it seems to me. This is my problem with proposals to give people perfect, nonsentient sexual/romantic partners, which I usually refer to as "catgirls" ("catboys"). The human species does have a statistical sex problem: evolution has not optimized the average man to make the average woman happy or vice versa. But there are less sad ways to solve this problem than both genders giving up on each other and retreating to catgirls/catboys.
A fictional short story illustrating some of the ideas in Interpersonal Entanglement above. (Many commenters seemed to like this story, and some said that the ideas were easier to understand in this form.)
What should you do if you think that the world's economy is going to stay bad for a very long time? How could such a scenario happen?
Having a Purpose in Life consistently shows up as something that increases stated well-being. Of course, the problem with trying to pick out "a Purpose in Life" in order to make yourself happier, is that this doesn't take you outside yourself; it's still all about you. To find purpose, you need to turn your eyes outward to look at the world and find things there that you care about - rather than obsessing about the wonderful spiritual benefits you're getting from helping others. In today's world, most of the highest-priority legitimate Causes consist of large groups of people in extreme jeopardy: Aging threatens the old, starvation threatens the poor, extinction risks threaten humanity as a whole. If the future goes right, many and perhaps all such problems will be solved - depleting the stream of victims to be helped. Will the future therefore consist of self-obsessed individuals, with nothing to take them outside themselves? I suggest, though, that even if there were no large groups of people in extreme jeopardy, we would still, looking around, find things outside ourselves that we cared about - friends, family; truth, freedom... Nonetheless, if the Future goes sufficiently well, there will come a time when you could search the whole of civilization, and never find a single person so much in need of help, as dozens you now pass on the street. If you do want to save someone from death, or help a great many people, then act now; your opportunity may not last, one way or another.
Describes some of the many complex considerations that determine what sort of happiness we most prefer to have - given that many of us would decline to just have an electrode planted in our pleasure centers.
A brief summary of principles for writing fiction set in a eutopia.
An interesting universe, one that would be incomprehensible to us today, is what the future looks like if things go right. There are many things that humans value such that, if you got everything else right when building an AI but left out that one thing, the future would wind up looking dull, flat, pointless, or empty. Any Future not shaped by a goal system with detailed reliable inheritance from human morals and metamorals, will contain almost nothing of worth.
Future explorers discover an alien civilization, and learn something unpleasant about it.
The true prisoner's dilemma against aliens. The conference struggles to decide the appropriate course of action.
Humanity encounters new aliens that see the existence of pain amongst humans as morally unacceptable.
Akon talks things over with the Confessor, and receives a history lesson.
The Superhappies propose a compromise.
Humanity accepts the Superhappies' bargain.
The Impossible Possible World tries to save humanity.
The last moments aboard the Impossible Possible World.
The cause that drives Yudkowsky isn't Friendly AI, and it isn't even specifically about preserving human values. It's simply about a future that's a lot better than the present.
In the previous couple of months, Overcoming Bias had focused too much on singularity-related issues and not enough on rationality. A two-month moratorium on the topic of the singularity/intelligence explosion is imposed.
It is possible to convey moral ideas in a clearer way through fiction than through abstract argument. Stories may also help us get closer to thinking about moral issues in near mode. Don't discount moral arguments just because they're written as fiction.
A purely hypothetical scenario about a world containing some authors trying to persuade people of a particular theory, and some authors simply trying to share valuable information.
Evolutionary Psychology and Microeconomics seem to develop different types of cynical theories, and are cynical about different things.
It's worth drawing a sharp boundary between ideas about evolutionary reasons for behavior, and cognitive reasons for behavior.
An experiment comparing expected parental grief at the death of a child at different ages to the reproductive success rate of children at that age in a hunter-gatherer tribe.
A story that seems to point to some major cultural differences.
Much of cynicism seems to be about signaling sophistication, rather than sharing uncommon, true, and important insights.
Much of our culture is the official view, not the idealistic view.
Dividing the world up into "childish" and "mature" is not a useful way to think.
Trying to signal wisdom or maturity by taking a neutral position is very seldom the right course of action.
An earlier post, on the same topic as yesterday's post.
An experiment in which two unprepared subjects play an asymmetric version of the Prisoner's Dilemma. Is the best outcome the one where each player gets as many points as possible, or the one in which each player gets about the same number of points?
Don't say that you'll figure out a solution to the worst case scenario if the worst case scenario happens. Plan it out in advance.
People underestimate the extent to which their own beliefs and attitudes are influenced by their experiences as a child.
The standard theory of efficient markets says that exploitable regularities in the past shouldn't be exploitable in the future. If everybody knows that "stocks have always gone up", then there's no reason to sell them - and so the price gets bid up until stocks are no longer an unusually good deal.
You should try hard and often to test your rationality, but how can you do that?
If it were possible to teach people reliably how to become exceptional, then it would no longer be exceptional.
There are many things we do that we can't easily understand how we do them. Teaching them is therefore a challenge.
Some people who have fallen into self-deception haven't actually deceived themselves. Some of them simply believe that they have deceived themselves, but have not actually done this.
Deceiving yourself is harder than it seems. What looks like a successfully adopted false belief may actually be just a belief in false belief.
People often mistake reasons for endorsing a proposition for reasons to believe that proposition.
It may be wise to tell yourself that you will not be able to successfully deceive yourself, because by telling yourself this, you may make it true.
Trying extra hard to believe something seems like Dark Side Epistemology, but what about trying extra hard to accept something that you know is true?
Behind every particular failure of social rationality is a larger and more general failure of social rationality; even if all religious content were deleted tomorrow from all human minds, the larger failures that permit religion would still be present. Religion may serve the function of an asphyxiated canary in a coal mine - getting rid of the canary doesn't get rid of the gas. Even a complete social victory for atheism would only be the beginning of the real work of rationalists. What could you teach people without ever explicitly mentioning religion, that would raise their general epistemic waterline to the point that religion went underwater?
The art of human rationality may not have been much developed because its practitioners lack a sense that vastly more is possible. The level of expertise that most rationalists strive to develop is not on a par with the skills of a professional mathematician - more like that of a strong casual amateur. Self-proclaimed "rationalists" don't seem to get huge amounts of personal mileage out of their craft, and no one sees a problem with this. Yet rationalists get less systematic training in a less systematic context than a first-dan black belt gets in hitting people.
An essay by Gillian Russell on "Epistemic Viciousness in the Martial Arts" generalizes amazingly to possible and actual problems with building a community around rationality. Most notably the extreme dangers associated with "data poverty" - the difficulty of testing the skills in the real world. But also such factors as the sacredness of the dojo, the investment in teachings long-practiced, the difficulty of book learning that leads into the need to trust a teacher, deference to historical masters, and above all, living in data poverty while continuing to act as if the luxury of trust is possible.
The branching schools of "psychotherapy", another domain in which experimental verification was weak (nonexistent, actually), show that an aspiring craft lives or dies by the degree to which it can be tested in the real world. In the absence of that testing, one becomes prestigious by inventing yet another school and having students, rather than excelling at any visible performance criterion. The field of hedonic psychology (happiness studies) began, to some extent, with the realization that you could measure happiness - that there was a family of measures that by golly did validate well against each other. The act of creating a new measurement creates new science; if it's a good measurement, you get good science.
How far the craft of rationality can be taken, depends largely on what methods can be invented for verifying it. Tests seem usefully stratifiable into reputational, experimental, and organizational. A "reputational" test is some real-world problem that tests the ability of a teacher or a school (like running a hedge fund, say) - "keeping it real", but without being able to break down exactly what was responsible for success. An "experimental" test is one that can be run on each of a hundred students (such as a well-validated survey). An "organizational" test is one that can be used to preserve the integrity of organizations by validating individuals or small groups, even in the face of strong incentives to game the test. The strength of solution invented at each level will determine how far the craft of rationality can go in the real world.
Brainstorming verification tests, asking along what dimensions you think you've improved due to "rationality".
When we talk about rationality, we're generally talking about either epistemic rationality (systematic methods of finding out the truth) or instrumental rationality (systematic methods of making the world more like we would like it to be). We can discuss these in the forms of probability theory and decision theory, but this doesn't fully cover the difficulty of being rational as a human. There is a lot more to rationality than just the formal theories.
People hear about a gamble involving a big payoff, and dismiss it as a form of Pascal's Wager. But the size of the payoff is not the flaw in Pascal's Wager. Just because an option has a very large potential payoff does not mean that the probability of getting that payoff is small, or that there are other possibilities that will cancel with it.
What works of fiction are out there that show characters who have acquired their skills at rationality through practice, and who we can watch in the act of employing those skills?
The atheist/libertarian/technophile/sf-fan/early-adopter/programmer/etc crowd, aka "the nonconformist cluster", seems to be stunningly bad at coordinating group projects. There are a number of reasons for this, but one of them is that people are as reluctant to speak agreement out loud, as they are eager to voice disagreements - the exact opposite of the situation that obtains in more cohesive and powerful communities. This is not rational either! It is dangerous to be half a rationalist (in general), and this also applies to teaching only disagreement but not agreement, or only lonely defiance but not coordination. The pseudo-rationalist taboo against expressing strong feelings probably doesn't help either.
One of the likely characteristics of someone who sets out to be a "rationalist" is a lower-than-usual tolerance for flawed thinking. This makes it very important to tolerate other people's tolerance - to avoid rejecting them because they tolerate people you wouldn't - since otherwise we must all have exactly the same standards of tolerance in order to work together, which is unlikely. Even if someone has a nice word to say about complete lunatics and crackpots - so long as they don't literally believe the same ideas themselves - try to be nice to them? Intolerance of tolerance corresponds to punishment of non-punishers, a very dangerous game-theoretic idiom that can lock completely arbitrary systems in place even when they benefit no one at all.
Paul Graham gets exactly the same accusations about "cults" and "echo chambers" and "coteries" that I do, in exactly the same tone - e.g. comparing the long hours worked by Y Combinator startup founders to the sleep-deprivation tactic used in cults, or claiming that founders were asked to move to the Bay Area startup hub as a cult tactic of separation from friends and family. This is bizarre, considering our relative surface risk factors. It just seems to be a failure mode of the nonconformist community in general. By far the most cultish-looking behavior on Hacker News is people trying to show off how willing they are to disagree with Paul Graham, which, I can personally testify, feels really bizarre when you're the target. Admiring someone shouldn't be so scary - I don't hold back so much when praising e.g. Douglas Hofstadter; in this world there are people who have pulled off awesome feats and it is okay to admire them highly.
Seven thoughts: I can list more than one thing that is awesome; when I think of "Douglas Hofstadter" I am really thinking of his all-time greatest work; the greatest work is not the person; when we imagine other people we are imagining their output, so the real Douglas Hofstadter is the source of "Douglas Hofstadter"; I most strongly get the sensation of awesomeness when I see someone outdoing me overwhelmingly, at some task I've actually tried; we tend to admire unique detailed awesome things and overlook common nondetailed awesome things; religion and its bastard child "spirituality" tends to make us overlook human awesomeness.
There are a lot of bad habits of thought that have developed to defend religious and spiritual experience. They aren't worth saving, even if we discard the original lie. Let's just admit that we were wrong, and enjoy the universe that's actually here.
The game-theoretical puzzle of the Ultimatum game has its reflection in a real-world dilemma: How much do you demand that an existing group adjust toward you, before you will adjust toward it? Our hunter-gatherer instincts will be tuned to groups of 40 with very minimal administrative demands and equal participation, meaning that we underestimate the inertia of larger and more specialized groups and demand too much before joining them. In other groups this resistance can be overcome by affective death spirals and conformity, but rationalists think themselves too good for this - with the result that people in the nonconformist cluster often set their joining prices way way way too high, like a 50-way split with each player demanding 20% of the money. Nonconformists need to move in the direction of joining groups more easily, even in the face of annoyances and apparent unresponsiveness. If an issue isn't worth personally fixing by however much effort it takes, it's not worth a refusal to contribute.
Anyone with a simple and obvious charitable project - responding with food and shelter to a tidal wave in Thailand, say - would be better off by far pleading with the Pope to mobilize the Catholics, rather than with Richard Dawkins to mobilize the atheists. For so long as this is true, any increase in atheism at the expense of Catholicism will be something of a hollow victory, regardless of all other benefits. Can no rationalist match the motivation that comes from the irrational fear of Hell? Or does the real story have more to do with the motivating power of physically meeting others who share your cause, and group norms of participating?
Churches serve a role of providing community - but they aren't explicitly optimized for this, because their nominal role is different. If we desire community without church, can we go one better in the course of deleting religion? There's a great deal of work to be done in the world; rationalist communities might potentially organize themselves around good causes, while explicitly optimizing for community.
Many causes benefit particularly from the spread of rationality - because it takes a little more rationality than usual to see their case, as a supporter, or even just a supportive bystander. Not just the obvious causes like atheism, but things like marijuana legalization. In the case of my own work this effect was strong enough that after years of bogging down I threw up my hands and explicitly recursed on creating rationalists. If such causes can come to terms with not individually capturing all the rationalists they create, then they can mutually benefit from mutual effort on creating rationalists. This cooperation may require learning to shut up about disagreements between such causes, and not fight over priorities, except in specialized venues clearly marked.
Requesting suggestions for an actual survey to be run.
When you consider that our grouping instincts are optimized for 50-person hunter-gatherer bands where everyone knows everyone else, it begins to seem miraculous that modern-day large institutions survive at all. And in fact, the vast majority of large modern-day institutions simply fail to exist in the first place. This is why funding of Science is largely through money thrown at Science rather than donations from individuals - research isn't a good emotional fit for the rare problems that individuals can manage to coordinate on. In fact very few things are, which is why e.g. 200 million adult Americans have such tremendous trouble supervising the 535 members of Congress. Modern humanity manages to put forth very little in the way of coordinated individual effort to serve our collective individual interests.
Omohundro's resource balance principle implies that the inside of any approximately rational system has a common currency of expected utilons. In our world, this common currency is called "money" and it is the unit of how much society cares about something - a brutal yet obvious point. Many people, seeing a good cause, would prefer to help it by donating a few volunteer hours. But this avoids the tremendous gains of comparative advantage, professional specialization, and economies of scale - the reason we're not still in caves, the only way anything ever gets done in this world, the tools grownups use when anyone really cares. Donating hours worked within a professional specialty and paying-customer priority, whether directly, or by donating the money earned to hire other professional specialists, is far more effective than volunteering unskilled hours.
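The comparative-advantage arithmetic behind this point can be sketched with made-up numbers (the wages below are illustrative assumptions, not figures from the post):

```python
# Hedged sketch: compare an hour volunteered directly at a charity
# against an hour worked in one's professional specialty with the pay donated.
specialist_wage = 200     # $/hour a professional earns (assumption)
charity_labor_cost = 20   # $/hour the charity pays for the same unskilled task (assumption)

hours_volunteered = 1
hours_bought = specialist_wage / charity_labor_cost  # hours of labor the donation funds

print(hours_bought)  # -> 10.0: one hour of donated specialist pay buys ten hours of help
```

Under these assumptions, volunteering an unskilled hour delivers a tenth of the value of working that hour in one's specialty and donating the earnings - which is the post's point about specialization and economies of scale.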
Wealthy philanthropists typically make the mistake of trying to purchase warm fuzzy feelings, status among friends, and actual utilitarian gains, simultaneously; this results in vague pushes along all three dimensions and a mediocre final result. It should be far more effective to spend some money/effort on buying altruistic fuzzies at maximum optimized efficiency (e.g. by helping people in person and seeing the results in person), buying status at maximum efficiency (e.g. by donating to something sexy that you can brag about, regardless of effectiveness), and spending most of your money on expected utilons (chosen through sheer cold-blooded shut-up-and-multiply calculation, without worrying about status or fuzzies).
Trying to breed e.g. egg-laying chickens by individual selection can produce odd side effects on the farm level, since a more dominant hen can produce more egg mass at the expense of other hens. Group selection is nearly impossible in Nature, but easy to impose in the laboratory, and group-selecting hens produced substantial increases in efficiency. Though most of my essays are about individual rationality - and indeed, Traditional Rationality also praises the lone heretic more than evil Authority - the real effectiveness of "rationalists" may end up determined by their performance in groups.
The idea behind the statement "Rationalists should win" is not that rationality will make you invincible. It means that if someone who isn't behaving according to your idea of rationality is outcompeting you, predictably and consistently, you should consider that you're not the one being rational.
The optimality theorems for probability theory and decision theory, are for perfect probability theory and decision theory. There is no theorem that incremental changes toward the ideal, starting from a flawed initial form, must yield incremental progress at each step along the way. Since perfection is unattainable, why dare to try for improvement? But my limited experience with specialized applications suggests that given enough progress, one can achieve huge improvements over baseline - it just takes a lot of progress to get there.
Extremely rare events can create bizarre circumstances in which people may not be able to effectively communicate about improbability.
You can excuse other people's shortcomings on the basis of extenuating circumstances, but you shouldn't do that with yourself.
Many communities feed emotional needs by offering their members someone or something to blame for failure - say, those looters who don't approve of your excellence. You can easily imagine some group of "rationalists" congratulating themselves on how reasonable they were, while blaming the surrounding unreasonable society for keeping them down. But this is not how real rationality works - there's no assumption that other agents are rational. We all face unfair tests (and yes, they are unfair to different degrees for different people); and how well you do with your unfair tests, is the test of your existence. Rationality is there to help you win anyway, not to provide a self-handicapping excuse for losing. There are no first-person extenuating circumstances. There is absolutely no point in going down the road of mutual bitterness and consolation, about anything, ever.
This post was not well-received, but the point was to suggest that a student must at some point leave the dojo and test their skills in the real world. The aspiration of an excellent student should not consist primarily of founding their own dojo and having their own students.
The term "playing to win" comes from Sirlin's book, and can be described as using every means available to win, so long as those means are legal within the structure of the game being played.
Aspiring rationalists often vastly overestimate their own ability to optimize other people's lives. They read nineteen webpages offering productivity advice that doesn't work for them... and then encounter the twentieth page, or invent a new method themselves, and wow, it really works - they've discovered the true method. Actually, they've just discovered the one method in twenty that works for them, and their confident advice is no better than randomly selecting one of the twenty blog posts. Other-Optimizing is exceptionally dangerous when you have power over the other person - for then you'll just believe that they aren't trying hard enough.
An intriguing dietary theory which appears to allow some people to lose substantial amounts of weight, but doesn't appear to work at all for others.
The Shangri-La diet works amazingly well for some people, but completely fails for others, for no known reason. Since the diet has a metabolic rationale and is not supposed to require willpower, its failure in my case and others' is unambiguously mysterious. If it required a component of willpower, then I and others might be tempted to blame ourselves for not having willpower. The art of combating akrasia (willpower failure) has the same sort of mysteries and is in the same primitive state; we don't know the deeper rule that explains why a trick works for one person but not another.
The bystander effect is the phenomenon in which groups of people are less likely to take action in an emergency than a lone individual. There are a few explanations for why this might be the case.
The causes of bystander apathy are even worse on the Internet. There may be an opportunity here for a startup to deliberately try to avert bystander apathy in online group coordination.
Suppose that a country of rationalists is attacked by a country of Evil Barbarians who know nothing of probability theory or decision theory. There's a certain concept of "rationality" which says that the rationalists inevitably lose, because the Barbarians believe in a heavenly afterlife if they die in battle, while the rationalists would all individually prefer to stay out of harm's way. So the rationalist civilization is doomed; it is too elegant and civilized to fight the savage Barbarians... And then there's the idea that rationalists should be able to (a) solve group coordination problems, (b) care a lot about other people and (c) win...
Analysis of the gender imbalance that appears in "rationalist" communities, suggesting nine possible causes of the effect, and possible corresponding solutions.
I sometimes think of myself as being like the protagonist in a classic SF labyrinth story, wandering further and further into some alien artifact, trying to radio back a description of what I'm seeing, so that I can be followed. But what I'm finding is not just the Way, the thing that lies at the center of the labyrinth; it is also my Way, the path that I would take to come closer to the center, from whatever place I started out. And yet there is still a common thing we are all trying to find. We should be aware that others' shortest paths may not be the same as our own, but this is not the same as giving up the ability to judge or to share.
When subjects know about a bias or are warned about a bias, overcorrection is not unheard of as an experimental result. That's what makes a lot of cognitive subtasks so troublesome - you know you're biased but you're not sure how much, and if you keep tweaking you may overcorrect. The dangers of underconfidence (overcorrecting for overconfidence) are that you pass up opportunities on which you could have been successful; fail to take on sufficiently difficult problems; lose forward momentum and adopt defensive postures; refuse to put the hypothesis of your inability to the test; and lose enough hope of triumph to try hard enough to win. You should ask yourself "Does this way of thinking make me stronger, or weaker?"
Good online communities die primarily by refusing to defend themselves, and so it has been since the days of Eternal September. Anyone acculturated by academia knows that censorship is a very grave sin... in their walled gardens where it costs thousands and thousands of dollars to enter. A community with internal politics will treat any attempt to impose moderation as a coup attempt (since internal politics seem of far greater import than invading barbarians). In rationalist communities this is probably an instance of underconfidence - mildly competent moderators are probably quite trustworthy to wield the banhammer. On Less Wrong, the community is the moderator (via karma) and you will need to trust yourselves enough to wield the power and keep the garden clear.
I've developed primarily the art of epistemic rationality, in particular, the arts required for advanced cognitive reductionism... arts like distinguishing fake explanations from real ones and avoiding affective death spirals. There is much else that needs developing to create a craft of rationality - fighting akrasia; coordinating groups; teaching, training, verification, and becoming a proper experimental science; developing better introductory literature... And yet it seems to me that there is a beginning barrier to surpass before you can start creating high-quality craft of rationality, having to do with virtually everyone who tries to think lofty thoughts going instantly astray, or indeed even realizing that a craft of rationality exists and that you ought to be studying cognitive science literature to create it. It's my hope that my writings, as partial as they are, will serve to surpass this initial barrier. The rest I leave to you.
Knowledge of this heuristic might be useful in fighting akrasia.
Practical advice is genuinely much, much more useful when it's backed up by concrete experimental results, causal models that are actually true, or valid math that is validly interpreted. (Listed in increasing order of difficulty.) Stripping out the theories and giving the mere advice alone wouldn't have nearly the same impact or even the same message; and oddly enough, translating experiments and math into practical advice seems to be a rare niche activity relative to academia. If there's a distinctive LW style, this is it.
The fact that this final series was on the craft and the community seems to have delivered a push in something of the wrong direction: (a) steering toward conversation about conversation and (b) making present accomplishment pale in the light of grander dreams. Time to go back to practical advice and deep theories, then.
The conclusion of the Beisutsukai series.
Generalization From One Example is the tendency to pay too much attention to the few anecdotal pieces of evidence you have experienced, and to model some general phenomenon on them. This is a special case of availability bias, and the way in which the mistake unfolds is closely related to the correspondence bias and the hindsight bias.
Applied scenario about forming priors.
People don't actually remember much of what they know; they only remember how to find it, and the fact that there is something to find. Thus, it's important to know about what's known in various domains, even without knowing the content.
People are offended by grabs for status.