Difference between revisions of "User:PeerInfinity/Scripts/SyncArticleLinks.php/SyncArticleLinksOutput.txt"

From Lesswrongwiki
 
==The following concept pages have an author link that links to an external site:==
 
*[[Puzzle game index]]
  
  
 
==The following article links have a summary available that was not added to the page:==
 
*[[A sense that more is possible]]

**[http://lesswrong.com/lw/2c/a_sense_that_more_is_possible/ A Sense That More Is Possible] - Why do people seem to care more about systematic methods of punching than systematic methods of thinking?


*[[Adaptation executers]]

**[http://lesswrong.com/lw/l0/adaptationexecuters_not_fitnessmaximizers/ Adaptation-Executers, not Fitness-Maximizers] - A central principle of evolutionary biology in general, and [[evolutionary psychology]] in particular.  If we regarded human taste buds as trying to ''maximize fitness'', we might expect that, say, humans fed a diet too high in calories and too low in micronutrients would begin to find lettuce delicious and cheeseburgers distasteful. But it is better to regard taste buds as an ''executing adaptation'' - they are adapted to an ancestral environment in which calories, not micronutrients, were the limiting factor.


*[[Affect heuristic]]

**[http://lesswrong.com/lw/lg/the_affect_heuristic/ The Affect Heuristic] - Positive and negative emotional impressions exert a greater effect on many decisions than does rational analysis.


*[[Affective death spiral]]

**[http://lesswrong.com/lw/lz/guardians_of_the_truth/ Guardians of the Truth] - Endorsing a concept of truth is not the same as endorsing a particular belief as eternally, absolutely, knowably true.


*[[Antiprediction]]

**[http://lesswrong.com/lw/if/your_strength_as_a_rationalist/ Your Strength as a Rationalist] - A hypothesis that forbids nothing permits everything, and thereby fails to constrain anticipation.  Your strength as a rationalist is your ability to be more confused by fiction than by reality.  If you are equally good at explaining any outcome, you have zero knowledge.


*[[Arguing by definition]]

**[http://lesswrong.com/lw/nz/arguing_by_definition/ Arguing "By Definition"] - You claim "X, by definition, is a Y!" On such occasions you're almost certainly trying to sneak in a connotation of Y that wasn't in your given definition.

**[http://lesswrong.com/lw/nf/the_parable_of_hemlock/ The Parable of Hemlock] - Socrates is a human, and humans, by definition, are mortal. So if you defined humans to not be mortal, would Socrates live forever?


*[[Arguments as soldiers]]

**[http://lesswrong.com/lw/gz/policy_debates_should_not_appear_onesided/ Policy Debates Should Not Appear One-Sided] - Debates over outcomes with multiple effects will have arguments both for and against, so you must integrate the evidence, not expect the issue to be completely one-sided.


*[[Availability heuristic]]

**[http://lesswrong.com/lw/j5/availability/ Availability] - Availability bias is a tendency to estimate the probability of an event based on whatever evidence about that event pops into your mind, without taking into account the ways in which some pieces of evidence are more memorable than others, or easier to come by than others. This bias directly consists in considering a mismatched data set that leads to a distorted model and a biased estimate.
  
 
*[[Bayesian]]

**[http://lesswrong.com/lw/qk/that_alien_message/ That Alien Message] - Einstein used evidence more efficiently than other physicists, but he was still extremely inefficient in an <em>absolute</em> sense. If a huge team of cryptographers and physicists were examining an interstellar transmission, going over it bit by bit, we could deduce principles on the order of Galilean gravity just from seeing one or two frames of a picture. As if the very first human to see an apple fall had, on the instant, realized that its position went as the square of the time and that this implied constant acceleration.

**[http://lesswrong.com/lw/qg/changing_the_definition_of_science/ Changing the Definition of Science] - Many of these ideas are surprisingly conventional, and being floated around by other thinkers. I'm a good deal less of a lonely iconoclast than I seem; maybe it's just the way I talk.


*[[Belief]]

**[http://lesswrong.com/lw/i4/belief_in_belief/ Belief in Belief] - Suppose someone claims to have a dragon in their garage, but as soon as you go to look, they say, "It's an ''invisible'' dragon!"  The remarkable thing is that they know ''in advance'' exactly which experimental results they shall have to excuse, indicating that some part of their mind knows what's really going on.  And yet they may honestly ''believe'' they believe there's a dragon in the garage.  They may perhaps believe it is ''virtuous'' to believe there is a dragon in the garage, and believe themselves virtuous.  Even though they ''anticipate as if'' there is no dragon.


*[[Belief as attire]]

**[http://lesswrong.com/lw/i7/belief_as_attire/ Belief as Attire] - When you've stopped anticipating-as-if something, but still believe it is virtuous to believe it, this does not create the true fire of the child who really does believe.  On the other hand, it is ''very'' easy for people to be passionate about group identification - sports teams, political sports teams - and this may account for the ''passion'' of beliefs worn as team-identification attire.


*[[Beliefs require observations]]

**[http://lesswrong.com/lw/ne/the_parable_of_the_dagger/ The Parable of the Dagger] - <i>A word fails to connect to reality in the first place.</i> Is Socrates a framster? Yes or no?


*[[Burch's law]]

**[http://lesswrong.com/lw/h0/burchs_law/ Burch's Law] - Just because your ethics require an action doesn't mean the universe will exempt you from the consequences.


*[[Challenging the Difficult]]

**[http://lesswrong.com/lw/gq/the_proper_use_of_humility/ The Proper Use of Humility] - Use humility to justify further action, not as an excuse for laziness and ignorance.


*[[Chronophone]]

**[http://lesswrong.com/lw/h5/archimedess_chronophone/ Archimedes's Chronophone] - Consider the thought experiment where you communicate general thinking patterns which will lead to right answers, as opposed to pre-hashed content...

**[http://lesswrong.com/lw/h6/chronophone_motivations/ Chronophone Motivations] - If you want to really benefit humanity, do some original thinking, especially about areas of application, and directions of effort.


*[[Color politics]]

**[http://lesswrong.com/lw/gt/a_fable_of_science_and_politics/ A Fable of Science and Politics] - People respond in different ways to clear evidence they're wrong, not always by updating and moving on.


*[[Conjunction fallacy]]

**[http://lesswrong.com/lw/ji/conjunction_fallacy/ Conjunction Fallacy] - Elementary probability theory tells us that the probability of one thing (we write P(A)) is necessarily greater than or equal to the probability of the <i>conjunction</i> of that thing <i>and</i> another thing (we write P(A&B)). However, in the psychology lab, subjects' judgments do not conform to this rule. This is [http://lesswrong.com/lw/jj/conjunction_controversy_or_how_they_nail_it_down/ not an isolated artifact] of a particular study design. Debiasing [http://lesswrong.com/lw/jk/burdensome_details/ won't be as simple] as practicing specific questions; it requires certain general habits of thought.


*[[Costs of rationality]]

**[http://lesswrong.com/lw/i3/making_beliefs_pay_rent_in_anticipated_experiences/ Making Beliefs Pay Rent (in Anticipated Experiences)] - Not every belief that we have is ''directly about'' sensory experience, but beliefs should ''pay rent'' in anticipations of experience.  For example, if I believe that "Gravity is 9.8 m/s^2" then I should be able to predict where I'll see the second hand on my watch at the time I hear the crash of a bowling ball dropped off a building.  On the other hand, if your postmodern English professor says that the famous writer Wulky is a "post-utopian", this may not actually mean anything.  The moral is to ask "What experiences do I anticipate?" not "What statements do I believe?"


*[[Dangerous knowledge]]

**[http://lesswrong.com/lw/he/knowing_about_biases_can_hurt_people/ Knowing About Biases Can Hurt People] - Knowing about common biases doesn't help you obtain truth if you only use this knowledge to attack beliefs you don't like.
  
 
*[[Decoherence]]

**[http://lesswrong.com/lw/px/decoherent_essences/ Decoherent Essences] - Decoherence is implicit within physics, not an extra law on top of it. You can choose representations that make decoherence harder to see, just like you can choose representations that make apples harder to see, but exactly the same physical process still goes on; the apple doesn't disappear and neither does decoherence. If you could make decoherence magically go away by choosing the right representation, we wouldn't need to shield quantum computers from the environment.

**[http://lesswrong.com/lw/q4/decoherence_is_falsifiable_and_testable/ Decoherence is Falsifiable and Testable] - (Note: Designed to be standalone readable.) An epistle to the physicists. To probability theorists, words like "simple", "falsifiable", and "testable" have exact mathematical meanings, which are there for very strong reasons. The (minority?) faction of physicists who say that many-worlds is "not falsifiable" or that it "violates Occam's Razor" or that it is "untestable", are committing the same kind of mathematical crime as non-physicists who invent their own theories of gravity that go as inverse-cube. This is one of the reasons why I, a non-physicist, dared to talk about physics - because I saw (some!) physicists using probability theory in a way that was simply wrong. Not just criticizable, but outright mathematically wrong: 2 + 2 = 3.


*[[Detached lever fallacy]]

**[http://lesswrong.com/lw/sp/detached_lever_fallacy/ Detached Lever Fallacy] - There is a lot of machinery hidden beneath the words, and rationalist's taboo is one way to take a step towards exposing it.


*[[Egalitarianism]]

**[http://lesswrong.com/lw/h9/tsuyoku_vs_the_egalitarian_instinct/ Tsuyoku vs. the Egalitarian Instinct] - There may be [[evolutionary psychology|evolutionary psychological]] factors that encourage [[modesty]] and mediocrity, at least in [[signaling|appearance]]; while some of that may still apply today, you should mentally plan and strive to pull ahead, if you are doing things right.


*[[Egan's law]]

**[http://lesswrong.com/lw/qz/living_in_many_worlds/ Living in Many Worlds] - The many worlds of quantum mechanics are not some strange, alien universe into which you have been thrust. They are where you have always lived. Egan's Law: "It all adds up to normality." Then why care about quantum physics at all? Because there's still the question of <em>what</em> adds up to normality, and the answer to this question turns out to be, "Quantum physics." If you're thinking of building any strange philosophies around many-worlds, you probably shouldn't - that's not what it's for.


*[[Emotion]]

**[http://lesswrong.com/lw/go/why_truth_and/ Why truth? And...] - Truth can be instrumentally useful and intrinsically satisfying.

**[http://lesswrong.com/lw/hp/feeling_rational/ Feeling Rational] - Emotions cannot be true or false, but they can follow from true or false beliefs.


*[[Error of crowds]]

**[http://lesswrong.com/lw/hc/the_error_of_crowds/ The Error of Crowds] - Variance decomposition does not imply majoritarian-ish results; this is an artifact of minimizing *square* error, and drops out using square root error when bias is larger than variance; how and why to factor in evidence requires more assumptions, as per Aumann agreement.

**[http://lesswrong.com/lw/hd/the_majority_is_always_wrong/ The Majority Is Always Wrong] - Often, anything worse than the majority opinion should get selected out, so the majority opinion is frequently the worst view still standing - strictly superior to no surviving alternative.
  
 
*[[Evidence]]
 
**[http://lesswrong.com/lw/kz/fake_optimization_criteria/ Fake Optimization Criteria] - Why study evolution?  For one thing - it lets us see an alien [[optimization process]] up close - lets us see the ''real'' consequence of optimizing ''strictly'' for an alien optimization criterion like inclusive genetic fitness.  Humans, who try to persuade other humans to do things their way, think that this policy criterion ought to require predators to [[group selection|restrain their breeding]] to live in harmony with prey; the true result is something that humans find less aesthetic.

**[http://lesswrong.com/lw/lq/fake_utility_functions/ Fake Utility Functions] - Describes the seeming fascination that many have with trying to compress morality down to a single principle.  The [http://lesswrong.com/lw/lp/fake_fake_utility_functions/ sequence leading up] to this post tries to explain the cognitive twists whereby people [http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/ smuggle] all of their complicated ''other'' preferences into their choice of ''exactly'' which acts they try to ''[http://lesswrong.com/lw/kq/fake_justification/ justify using]'' their single principle; but if they were ''really'' following ''only'' that single principle, they would [http://lesswrong.com/lw/kz/fake_optimization_criteria/ choose other acts to justify].


*[[Fallacy of gray]]

**[http://lesswrong.com/lw/nw/fallacies_of_compression/ Fallacies of Compression] - You have only one word, but there are two or more different things-in-reality, so that all the facts about them get dumped into a single undifferentiated mental bucket.


*[[Free-floating belief]]

**[http://lesswrong.com/lw/i3/making_beliefs_pay_rent_in_anticipated_experiences/ Making Beliefs Pay Rent (in Anticipated Experiences)] - Not every belief that we have is ''directly about'' sensory experience, but beliefs should ''pay rent'' in anticipations of experience.  For example, if I believe that "Gravity is 9.8 m/s^2" then I should be able to predict where I'll see the second hand on my watch at the time I hear the crash of a bowling ball dropped off a building.  On the other hand, if your postmodern English professor says that the famous writer Wulky is a "post-utopian", this may not actually mean anything.  The moral is to ask "What experiences do I anticipate?" not "What statements do I believe?"

**[http://lesswrong.com/lw/i4/belief_in_belief/ Belief in Belief] - Suppose someone claims to have a dragon in their garage, but as soon as you go to look, they say, "It's an ''invisible'' dragon!"  The remarkable thing is that they know ''in advance'' exactly which experimental results they shall have to excuse, indicating that some part of their mind knows what's really going on.  And yet they may honestly ''believe'' they believe there's a dragon in the garage.  They may perhaps believe it is ''virtuous'' to believe there is a dragon in the garage, and believe themselves virtuous.  Even though they ''anticipate as if'' there is no dragon.
  
 
*[[Free will (solution)]]
 
let your old, timeful intuitions run wild in the absence of their subject matter.


*[[Friedman unit]]

**[http://lesswrong.com/lw/hi/futuristic_predictions_as_consumable_goods/ Futuristic Predictions as Consumable Goods] - The Friedman Unit is named after Thomas Friedman, who eight times (between 2003 and 2007) called "the next six months" the critical period in Iraq.  This is because future predictions are created and consumed in the now; they are used to create feelings of delicious goodness or delicious horror now, not to provide useful future advice.


*[[Fully general counterargument]]

**[http://lesswrong.com/lw/he/knowing_about_biases_can_hurt_people/ Knowing About Biases Can Hurt People] - Knowing about common biases doesn't help you obtain truth if you only use this knowledge to attack beliefs you don't like.
  
 
*[[Futility of chaos]]
 
**[http://lesswrong.com/lw/ks/the_wonder_of_evolution/ The Wonder of Evolution] - ...is not how amazingly well it works, but that it works ''at all'' without a mind, brain, or the ability to think abstractly - that an entirely ''accidental'' process can produce complex designs.  If you talk about how amazingly ''well'' evolution works, you're missing the point.

**[http://lesswrong.com/lw/vv/logical_or_connectionist_ai/ Logical or Connectionist AI?] - (The correct answer being "Wrong!")


*[[Generalization from fictional evidence]]

**[http://lesswrong.com/lw/k9/the_logical_fallacy_of_generalization_from/ The Logical Fallacy of Generalization from Fictional Evidence] - The fallacy consists in drawing real-world conclusions based on statements invented and selected for the purpose of writing fiction. The data set is not at all representative of the real world, and in particular of whatever real-world phenomenon you need to understand to answer your real-world question. Considering this data set leads to an inadequate model and inadequate answers.
  
 
*[[Group selection]]
 
**[http://lesswrong.com/lw/l8/conjuring_an_evolution_to_serve_you/ Conjuring An Evolution To Serve You] - If you take the hens who lay the most eggs in each generation, and breed from them, you should get hens who lay more and more eggs.  Sounds logical, right?  But this selection may actually favor the most ''dominant'' hen, that pecked its way to the top of the pecking order at the expense of other hens.  Such breeding programs produce hens that must be housed in individual cages, or they will peck each other to death.  Jeff Skilling of Enron fancied himself an evolution-conjurer - summoning ''the awesome power of evolution'' to work for him - and so, every year, every Enron employee's performance would be evaluated, and the bottom 10% would get fired, and the top performers would get huge raises and bonuses...

**[http://lesswrong.com/lw/st/anthropomorphic_optimism/ Anthropomorphic Optimism] - You shouldn't bother coming up with clever, persuasive arguments for why evolution will do things the way you prefer.  It really isn't listening.


*[[Halo effect]]

**[http://lesswrong.com/lw/lj/the_halo_effect/ The Halo Effect] - Positive qualities <i>seem</i> to correlate with each other, whether or not they ''actually'' do.


*[[Hindsight bias]]

**[http://lesswrong.com/lw/il/hindsight_bias/ Hindsight bias] - Describes the tendency for events to seem much more likely in hindsight than could have been predicted beforehand.


*[[Hope]]

**[http://lesswrong.com/lw/gx/just_lose_hope_already/ Just Lose Hope Already] - Admit when the evidence goes against you, else things can get a whole lot worse.
  
 
*[[How To Actually Change Your Mind]]
 
**[http://lesswrong.com/lw/lj/the_halo_effect/ The Halo Effect] - Positive qualities <i>seem</i> to correlate with each other, whether or not they ''actually'' do.

**[http://lesswrong.com/lw/if/your_strength_as_a_rationalist/ Your Strength as a Rationalist] - A hypothesis that forbids nothing permits everything, and thereby fails to constrain anticipation.  Your strength as a rationalist is your ability to be more confused by fiction than by reality.  If you are equally good at explaining any outcome, you have zero knowledge.

**[http://lesswrong.com/lw/il/hindsight_bias/ Hindsight bias] - Describes the tendency for events to seem much more likely in hindsight than could have been predicted beforehand.

**[http://lesswrong.com/lw/iw/positive_bias_look_into_the_dark/ Positive Bias: Look Into the Dark] - The tendency to look for evidence that confirms a hypothesis, rather than disconfirming evidence.

**[http://lesswrong.com/lw/kz/fake_optimization_criteria/ Fake Optimization Criteria] - Why study evolution?  For one thing - it lets us see an alien [[optimization process]] up close - lets us see the ''real'' consequence of optimizing ''strictly'' for an alien optimization criterion like inclusive genetic fitness.  Humans, who try to persuade other humans to do things their way, think that this policy criterion ought to require predators to [[group selection|restrain their breeding]] to live in harmony with prey; the true result is something that humans find less aesthetic.

**[http://lesswrong.com/lw/i4/belief_in_belief/ Belief in Belief] - Suppose someone claims to have a dragon in their garage, but as soon as you go to look, they say, "It's an ''invisible'' dragon!"  The remarkable thing is that they know ''in advance'' exactly which experimental results they shall have to excuse, indicating that some part of their mind knows what's really going on.  And yet they may honestly ''believe'' they believe there's a dragon in the garage.  They may perhaps believe it is ''virtuous'' to believe there is a dragon in the garage, and believe themselves virtuous.  Even though they ''anticipate as if'' there is no dragon.

**[http://lesswrong.com/lw/s/belief_in_selfdeception/ Belief in Self-Deception] - Deceiving yourself is harder than it seems. What looks like a successfully adopted false belief may actually be just a [[belief in belief|belief in false belief]].

**[http://lesswrong.com/lw/hu/the_third_alternative/ The Third Alternative] - On not skipping the step of looking for additional alternatives.

**[http://lesswrong.com/lw/hp/feeling_rational/ Feeling Rational] - Emotions cannot be true or false, but they can follow from true or false beliefs.
  
Line 94: Line 196:
 
**[http://lesswrong.com/lw/no/how_an_algorithm_feels_from_inside/ How An Algorithm Feels From Inside] - You talk about categories as if they are manna fallen from the Platonic Realm, rather than inferences implemented in a real brain.
 
**[http://lesswrong.com/lw/no/how_an_algorithm_feels_from_inside/ How An Algorithm Feels From Inside] - You talk about categories as if they are manna fallen from the Platonic Realm, rather than inferences implemented in a real brain.
 
**[http://lesswrong.com/lw/of/dissolving_the_question/ Dissolving the Question] - This is where the "free will" puzzle is explicitly posed, along with criteria for what does and does not constitute a satisfying answer.
 
**[http://lesswrong.com/lw/of/dissolving_the_question/ Dissolving the Question] - This is where the "free will" puzzle is explicitly posed, along with criteria for what does and does not constitute a satisfying answer.
 +
 +
*[[Humility]]
 +
**[http://lesswrong.com/lw/gq/the_proper_use_of_humility/ The Proper Use of Humility] - Use humility to justify further action, not as an excuse for laziness and ignorance.
 +
 +
*[[Hypocrisy]]
 +
**[http://lesswrong.com/lw/h7/selfdeception_hypocrisy_or_akrasia/ Self-deception: Hypocrisy or Akrasia?] - It is suggested that in some cases, people who say one thing and do another thing are not in fact "hypocrites".  Instead they are suffering from "akrasia" or weakness of will.  At the end, the problem of deciding what parts of a person's mind are considered their "real self" is discussed.
 +
 +
*[[I don't know]]
 +
**[http://lesswrong.com/lw/gs/i_dont_know/ "I don't know."] - You can pragmatically say "I don't know", but you rationally should have a probability distribution.
 +
 +
*[[Illusion of transparency]]
 +
**[http://lesswrong.com/lw/ke/illusion_of_transparency_why_no_one_understands/ Illusion of Transparency: Why No One Understands You] - Everyone knows what their own words mean, but experiments have confirmed that we systematically overestimate how much sense we are making to others.
 +
**[http://lesswrong.com/lw/o9/words_as_mental_paintbrush_handles/ Words as Mental Paintbrush Handles] - Visualize a "triangular lightbulb". What did you see?
 +
 +
*[[Improper belief]]
**[http://lesswrong.com/lw/i7/belief_as_attire/ Belief as Attire] - When you've stopped anticipating-as-if something, but still believe it is virtuous to believe it, this does not create the true fire of the child who really does believe.  On the other hand, it is ''very'' easy for people to be passionate about group identification - sports teams, political sports teams - and this may account for the ''passion'' of beliefs worn as team-identification attire.
  
 
*[[Inferential distance]]
 
*[[Joy in the Merely Real]]
**[http://lesswrong.com/lw/or/joy_in_the_merely_real/ Joy in the Merely Real] - If you can't take joy in things that turn out to be explicable, you're going to set yourself up for eternal disappointment. Don't worry if quantum physics turns out to be normal.
**[http://lesswrong.com/lw/p1/initiation_ceremony/ Initiation Ceremony] - Brennan is inducted into the Conspiracy.
  
 
*[[Joy in the merely real]]
**[http://lesswrong.com/lw/oo/explaining_vs_explaining_away/ Explaining vs. Explaining Away] - Elementary [[reductionism]].
**[http://lesswrong.com/lw/or/joy_in_the_merely_real/ Joy in the Merely Real] - If you can't take joy in things that turn out to be explicable, you're going to set yourself up for eternal disappointment. Don't worry if quantum physics turns out to be normal.

*[[Magic]]
**[http://lesswrong.com/lw/hq/universal_fire/ Universal Fire] - You can't change just one thing in the world and expect the rest to continue working as before.
  
 
*[[Many-worlds interpretation]]

(the many-worlds interpretation <em>wins outright</em> given the current state of evidence)

*[[Map and Territory (sequence)]]
**[http://lesswrong.com/lw/go/why_truth_and/ Why truth? And...] - Truth can be instrumentally useful and intrinsically satisfying.

*[[Mind-killer]]
**[http://lesswrong.com/lw/gw/politics_is_the_mindkiller/ Politics is the Mind-Killer] - Beware in your discussions that for clear evolutionary reasons, people have great difficulty being rational about current political issues.

*[[Motivated skepticism]]
**[http://lesswrong.com/lw/he/knowing_about_biases_can_hurt_people/ Knowing About Biases Can Hurt People] - Knowing about common biases doesn't help you obtain truth if you only use this knowledge to attack beliefs you don't like.

*[[No safe defense]]
**[http://lesswrong.com/lw/qf/no_safe_defense_not_even_science/ No Safe Defense, Not Even Science] - Why am I trying to break your trust in Science? Because you can't think and trust at the same time. The social rules of Science are verbal rather than quantitative; it is possible to believe you are following them. With Bayesianism, it is never possible to do an exact calculation and get the exact rational answer that you know exists. You are <em>visibly</em> less than perfect, and so you will not be tempted to trust yourself.
  
 
*[[Oops]]
**[http://lesswrong.com/lw/gx/just_lose_hope_already/ Just Lose Hope Already] - Admit when the evidence goes against you, else things can get a whole lot worse.

*[[Outside view]]
**[http://lesswrong.com/lw/jg/planning_fallacy/ Planning Fallacy] - We tend to plan envisioning that everything will go as expected. Even assuming that such an estimate is accurate conditional on everything going as expected, things will ''not'' go as expected. As a result, we routinely see outcomes worse than the ''ex ante'' worst case scenario.

*[[Philosophical zombie]]
**[http://lesswrong.com/lw/p7/zombies_zombies/ Zombies! Zombies?] - Don't try to put your consciousness or your personal identity outside physics. Whatever makes you say "I think therefore I am", causes your lips to move; it is within the chains of cause and effect that produce our observed universe.

*[[Planning fallacy]]
**[http://lesswrong.com/lw/jg/planning_fallacy/ Planning Fallacy] - We tend to plan envisioning that everything will go as expected. Even assuming that such an estimate is accurate conditional on everything going as expected, things will ''not'' go as expected. As a result, we routinely see outcomes worse than the ''ex ante'' worst case scenario.
  
 
*[[Policy debates should not appear one-sided]]

*[[Politics is the Mind-Killer]]
**[http://lesswrong.com/lw/gw/politics_is_the_mindkiller/ Politics is the Mind-Killer] - Beware in your discussions that for clear evolutionary reasons, people have great difficulty being rational about current political issues.
**[http://lesswrong.com/lw/gt/a_fable_of_science_and_politics/ A Fable of Science and Politics] - People respond in different ways to clear evidence they're wrong, not always by updating and moving on.
**[http://lesswrong.com/lw/gz/policy_debates_should_not_appear_onesided/ Policy Debates Should Not Appear One-Sided] - Debates over outcomes with multiple effects will have arguments both for and against, so you must integrate the evidence, not expect the issue to be completely one-sided.
**[http://lesswrong.com/lw/h1/the_scales_of_justice_the_notebook_of_rationality/ The Scales of Justice, the Notebook of Rationality] - People have an irrational tendency to simplify their assessment of things into how good or bad they are without considering that the things in question may have many distinct and unrelated attributes.
**[http://lesswrong.com/lw/hz/correspondence_bias/ Correspondence Bias] - Also known as the fundamental attribution error, refers to the tendency to attribute the behavior of others to intrinsic dispositions, while excusing one's own behavior as the result of circumstance.

*[[Positive bias]]
**[http://lesswrong.com/lw/iw/positive_bias_look_into_the_dark/ Positive Bias: Look Into the Dark] - The tendency to look for evidence that confirms a hypothesis, rather than disconfirming evidence.
  
 
*[[Possibility]]
**[http://lesswrong.com/lw/h4/useless_medical_disclaimers/ Useless Medical Disclaimers] - Medical disclaimers without probabilities are hard to use, and if probabilities aren't there because some people can't handle having them there, maybe we ought to tax those people.

*[[Prediction]]
**[http://lesswrong.com/lw/hi/futuristic_predictions_as_consumable_goods/ Futuristic Predictions as Consumable Goods] - The Friedman Unit is named after Thomas Friedman, who called "the next six months" the critical period in Iraq eight times between 2003 and 2007. This is because future predictions are created and consumed in the now; they are used to create feelings of delicious goodness or delicious horror now, not to provide useful advice about the future.

*[[Problem of verifying rationality]]
**[http://lesswrong.com/lw/gn/the_martial_art_of_rationality/ The Martial Art of Rationality] - Rationality is a technique to be trained.
**[http://lesswrong.com/lw/2c/a_sense_that_more_is_possible/ A Sense That More Is Possible] - Why do people seem to care more about systematic methods of punching than systematic methods of thinking?
**[http://lesswrong.com/lw/2s/3_levels_of_rationality_verification/ 3 Levels of Rationality Verification] - Reputational, experimental, and organizational: the different strengths of evidence needed to (a) prevent a school from degenerating, (b) systematically test particular techniques, or (c) credential individuals.
  
 
*[[Rational evidence]]
**[http://lesswrong.com/lw/qb/science_doesnt_trust_your_rationality/ Science Doesn't Trust Your Rationality] - The reason Science doesn't always agree with the exact, Bayesian, rational answer, is that Science doesn't <em>trust</em> you to be rational. It wants you to go out and gather overwhelming experimental evidence.

*[[Rationality]]
**[http://lesswrong.com/lw/gn/the_martial_art_of_rationality/ The Martial Art of Rationality] - Rationality is a technique to be trained.
**[http://lesswrong.com/lw/go/why_truth_and/ Why truth? And...] - Truth can be instrumentally useful and intrinsically satisfying.

*[[Reality is normal]]
**[http://lesswrong.com/lw/hs/think_like_reality/ Think Like Reality] - "Quantum physics is not "weird". ''You'' are weird. ''You'' have the absolutely bizarre idea that reality ought to consist of little billiard balls bopping around, when in fact reality is a perfectly normal cloud of complex amplitude in configuration space. This is ''your'' problem, not reality's, and ''you'' are the one who needs to change."

*[[Reductionism]]
**[http://lesswrong.com/lw/oo/explaining_vs_explaining_away/ Explaining vs. Explaining Away] - Elementary [[reductionism]].
  
 
*[[Reductionism (sequence)]]
**[http://lesswrong.com/lw/hq/universal_fire/ Universal Fire] - You can't change just one thing in the world and expect the rest to continue working as before.
**[http://lesswrong.com/lw/hr/universal_law/ Universal Law] - The same laws apply everywhere.
**[http://lesswrong.com/lw/of/dissolving_the_question/ Dissolving the Question] - This is where the "free will" puzzle is explicitly posed, along with criteria for what does and does not constitute a satisfying answer.
**[http://lesswrong.com/lw/op/fake_reductionism/ Fake Reductionism] - It takes a detailed step-by-step walkthrough.

*[[Representativeness heuristic]]
**[http://lesswrong.com/lw/ji/conjunction_fallacy/ Conjunction Fallacy] - Elementary probability theory tells us that the probability of one thing (we write P(A)) is necessarily greater than or equal to the probability of the <i>conjunction</i> of that thing <i>and</i> another thing (we write P(A&B)). However, in the psychology lab, subjects' judgments do not conform to this rule. This is [http://lesswrong.com/lw/jj/conjunction_controversy_or_how_they_nail_it_down/ not an isolated artifact] of a particular study design. Debiasing [http://lesswrong.com/lw/jk/burdensome_details/ won't be as simple] as practicing specific questions; it requires certain general habits of thought.
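The inequality in the summary above, P(A) &ge; P(A&B), is easy to verify by simulation; a minimal sketch (the die-roll events are illustrative, not from the linked study):

```python
import random

# Monte Carlo check of the conjunction rule P(A) >= P(A & B).
# Illustrative events on a fair die: A = "roll is even", B = "roll > 3".
random.seed(0)
trials = 100_000
count_a = count_ab = 0
for _ in range(trials):
    roll = random.randint(1, 6)
    if roll % 2 == 0:          # event A occurred
        count_a += 1
        if roll > 3:           # event A AND B occurred
            count_ab += 1

p_a = count_a / trials    # estimates P(A)   = 1/2 (rolls 2, 4, 6)
p_ab = count_ab / trials  # estimates P(A&B) = 1/3 (rolls 4, 6)
# The conjunction can never be more probable than either conjunct.
assert p_ab <= p_a
```

The assertion holds for any pair of events, since every outcome in A&B is also an outcome in A; the fallacy is that human judgments often rank the conjunction as more probable anyway.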
  
 
*[[Science]]
**[http://lesswrong.com/lw/m9/aschs_conformity_experiment/ Asch's Conformity Experiment] - The unanimous agreement of surrounding others can make subjects disbelieve (or at least, fail to report) what's right before their eyes. The addition of just one dissenter is enough to dramatically reduce the rates of improper conformity.
**[http://lesswrong.com/lw/ma/on_expressing_your_concerns/ On Expressing Your Concerns] - A way of breaking the conformity effect in some cases.

*[[Self-deception]]
**[http://lesswrong.com/lw/h7/selfdeception_hypocrisy_or_akrasia/ Self-deception: Hypocrisy or Akrasia?] - It is suggested that in some cases, people who say one thing and do another thing are not in fact "hypocrites".  Instead they are suffering from "akrasia" or weakness of will.  At the end, the problem of deciding what parts of a person's mind are considered their "real self" is discussed.

*[[Separate magisteria]]
**[http://lesswrong.com/lw/gv/outside_the_laboratory/ Outside the Laboratory] - Those who understand the map/territory distinction will ''integrate'' their knowledge, as they see the evidence that reality is a single unified process.
  
 
*[[Slowness of evolution]]
**[http://lesswrong.com/lw/kt/evolutions_are_stupid_but_work_anyway/ Evolutions Are Stupid (But Work Anyway)] - Evolution, while not simple, is sufficiently simpler than organic brains that we can describe mathematically how slow and stupid it is.

*[[Standard of evidence]]
**[http://lesswrong.com/lw/qa/the_dilemma_science_or_bayes/ The Dilemma: Science or Bayes?] - The failure of first-half-of-20th-century-physics was not due to <em>straying</em> from the scientific method. Science and rationality - that is, Science and Bayesianism - aren't the same thing, and sometimes they give different answers.
**[http://lesswrong.com/lw/qb/science_doesnt_trust_your_rationality/ Science Doesn't Trust Your Rationality] - The reason Science doesn't always agree with the exact, Bayesian, rational answer, is that Science doesn't <em>trust</em> you to be rational. It wants you to go out and gather overwhelming experimental evidence.

*[[Statistical bias]]
**[http://lesswrong.com/lw/ha/statistical_bias/ "Statistical Bias"] - There are two types of error: systematic error and random variance error. By repeating experiments you can average out and drive down the variance error.
**[http://lesswrong.com/lw/hb/useful_statistical_biases/ Useful Statistical Biases] - If you know variance (noise) exists, you can intentionally introduce bias by ignoring some squiggles and choosing a simpler hypothesis, thereby lowering expected variance while raising expected bias; sometimes total error is lower, hence the "bias-variance tradeoff" technique.
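The tradeoff described above can be seen numerically; a minimal sketch with illustrative numbers (a small noisy sample, and a deliberately biased estimator that shrinks the sample mean toward zero):

```python
import random
import statistics

# Bias-variance sketch: with few, noisy observations, a deliberately
# biased estimator (shrinking the sample mean toward zero) can have
# lower total squared error than the unbiased sample mean.
random.seed(1)
TRUE_MEAN, NOISE_SD, N, SHRINK = 2.0, 5.0, 4, 0.5

def mean_squared_error(estimator, trials=20_000):
    """Average squared error of `estimator` over many simulated samples."""
    total = 0.0
    for _ in range(trials):
        sample = [random.gauss(TRUE_MEAN, NOISE_SD) for _ in range(N)]
        total += (estimator(sample) - TRUE_MEAN) ** 2
    return total / trials

mse_unbiased = mean_squared_error(statistics.mean)
mse_biased = mean_squared_error(lambda s: SHRINK * statistics.mean(s))
# Theory: unbiased MSE = 25/4 = 6.25 (pure variance);
# shrunken MSE = 0.25 * 6.25 (variance) + 1.0 (squared bias) = 2.5625.
assert mse_biased < mse_unbiased
```

Here the shrinkage estimator trades a squared bias of 1.0 for a fourfold reduction in variance, so its total error is lower; with a larger sample the variance term shrinks and the unbiased estimator wins again.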

*[[Stupidity of evolution]]
**[http://lesswrong.com/lw/ks/the_wonder_of_evolution/ The Wonder of Evolution] - ...is not how amazingly well it works, but that it works ''at all'' without a mind, brain, or the ability to think abstractly - that an entirely ''accidental'' process can produce complex designs.  If you talk about how amazingly ''well'' evolution works, you're missing the point.
**[http://lesswrong.com/lw/kt/evolutions_are_stupid_but_work_anyway/ Evolutions Are Stupid (But Work Anyway)] - Evolution, while not simple, is sufficiently simpler than organic brains that we can describe mathematically how slow and stupid it is.

*[[Superexponential conceptspace]]
**[http://lesswrong.com/lw/o3/superexponential_conceptspace_and_simple_words/ Superexponential Conceptspace, and Simple Words] - You draw an unsimple boundary without any reason to do so.

*[[Superstimulus]]
**[http://lesswrong.com/lw/h3/superstimuli_and_the_collapse_of_western/ Superstimuli and the Collapse of Western Civilization] - As a side effect of evolution, super-stimuli exist, and as a result of economics, are getting worse and should continue to get worse.
**[http://lesswrong.com/lw/l0/adaptationexecuters_not_fitnessmaximizers/ Adaptation-Executers, not Fitness-Maximizers] - A central principle of evolutionary biology in general, and [[evolutionary psychology]] in particular.  If we regarded human taste buds as trying to ''maximize fitness'', we might expect that, say, humans fed a diet too high in calories and too low in micronutrients, would begin to find lettuce delicious, and cheeseburgers distasteful. But it is better to regard taste buds as an ''executing adaptation'' - they are adapted to an ancestral environment in which calories, not micronutrients, were the limiting factor.

*[[Surprise]]
**[http://lesswrong.com/lw/hs/think_like_reality/ Think Like Reality] - "Quantum physics is not "weird". ''You'' are weird. ''You'' have the absolutely bizarre idea that reality ought to consist of little billiard balls bopping around, when in fact reality is a perfectly normal cloud of complex amplitude in configuration space. This is ''your'' problem, not reality's, and ''you'' are the one who needs to change."
**[http://lesswrong.com/lw/if/your_strength_as_a_rationalist/ Your Strength as a Rationalist] - A hypothesis that forbids nothing, permits everything, and thereby fails to constrain anticipation.  Your strength as a rationalist is your ability to be more confused by fiction than by reality.  If you are equally good at explaining any outcome, you have zero knowledge.

*[[Technical explanation]]
**[http://lesswrong.com/lw/i3/making_beliefs_pay_rent_in_anticipated_experiences/ Making Beliefs Pay Rent (in Anticipated Experiences)] - Not every belief that we have is ''directly about'' sensory experience, but beliefs should ''pay rent'' in anticipations of experience.  For example, if I believe that "Gravity is 9.8 m/s^2" then I should be able to predict where I'll see the second hand on my watch at the time I hear the crash of a bowling ball dropped off a building.  On the other hand, if your postmodern English professor says that the famous writer Wulky is a "post-utopian", this may not actually mean anything.  The moral is to ask "What experiences do I anticipate?" not "What statements do I believe?"

*[[The Quantum Physics Sequence]]
**[http://lesswrong.com/lw/on/reductionism/ Reductionism] - We build <em>models</em> of the universe that have many different levels of description. But so far as anyone has been able to determine, the <em>universe itself</em> has only the single level of fundamental physics - reality doesn't explicitly compute protons, only quarks.
**[http://lesswrong.com/lw/p7/zombies_zombies/ Zombies! Zombies?] - Don't try to put your consciousness or your personal identity outside physics. Whatever makes you say "I think therefore I am", causes your lips to move; it is within the chains of cause and effect that produce our observed universe.
**[http://lesswrong.com/lw/pb/belief_in_the_implied_invisible/ Belief in the Implied Invisible] - That it's impossible even in principle to observe something sometimes isn't enough to conclude that it doesn't exist.
**[http://lesswrong.com/lw/pc/quantum_explanations/ Quantum Explanations] - Quantum mechanics doesn't deserve its fearsome reputation.
**[http://lesswrong.com/lw/pd/configurations_and_amplitude/ Configurations and Amplitude] - A preliminary glimpse at the stuff reality is made of. The classic split-photon experiment with half-silvered mirrors. Alternative pathways the photon can take, can cancel each other out. The mysterious measuring tool that tells us the relative squared moduli.
**[http://lesswrong.com/lw/pe/joint_configurations/ Joint Configurations] - The laws of physics are inherently over mathematical entities, configurations, that involve multiple particles. A basic, ontologically existent entity, according to our current understanding of quantum mechanics, does not look like a photon - it looks like a configuration of the universe with "A photon here, a photon there." Amplitude flows between these configurations can cancel or add; this gives us a way to detect which configurations are distinct. It is an <em>experimentally testable</em> fact that "Photon 1 here, photon 2 there" is the <em>same configuration</em> as "Photon 2 here, photon 1 there".
**[http://lesswrong.com/lw/pf/distinct_configurations/ Distinct Configurations] - Since configurations are over the combined state of all the elements in a system, adding a sensor that detects whether a particle went one way or the other, becomes a new element of the system that can make configurations "distinct" instead of "identical". This confused the living daylights out of early quantum experimenters, because it meant that things behaved differently when they tried to "measure" them. But it's not only measuring instruments that do the trick - any sensitive physical element will do - and the distinctness of configurations is a physical fact, not a fact about our knowledge. There is no need to suppose that the universe cares what we think.
**[http://lesswrong.com/lw/pg/where_philosophy_meets_science/ Where Philosophy Meets Science] - In retrospect, supposing that quantum physics had anything to do with consciousness was a big mistake. Could philosophers have told the physicists so? But we don't usually see philosophers sponsoring major advances in physics; why not?
**[http://lesswrong.com/lw/ph/can_you_prove_two_particles_are_identical/ Can You Prove Two Particles Are Identical?] - You wouldn't think that it would be possible to do an experiment that told you that two particles are <em>completely</em> identical - not just to the limit of experimental precision, but <em>perfectly.</em> You could even give a precise-sounding philosophical argument for <em>why</em> it was not possible - but the argument would have a deeply buried assumption. Quantum physics violates this deep assumption, making the experiment easy.
**[http://lesswrong.com/lw/pi/classical_configuration_spaces/ Classical Configuration Spaces] - How to visualize the state of a system of two 1-dimensional particles, as a single point in 2-dimensional space. A preliminary step before moving into...
**[http://lesswrong.com/lw/pj/the_quantum_arena/ The Quantum Arena] - Instead of a system state being associated with a single point in a classical configuration space, the instantaneous real state of a quantum system is a complex amplitude distribution over a quantum configuration space. What creates the illusion of "individual particles", like an electron caught in a trap, is a plaid distribution - one that happens to factor into the product of two parts. It is the whole distribution that evolves when a quantum system evolves. Individual configurations don't have physics; amplitude distributions have physics. Quantum entanglement is the general case; quantum <em>independence</em> is the special case.
**[http://lesswrong.com/lw/pk/feynman_paths/ Feynman Paths] - Instead of thinking that a photon takes a single straight path through space, we can regard it as taking all possible paths through space, and adding the amplitudes for every possible path. Nearly all the paths cancel out - unless we do clever quantum things, so that some paths add instead of canceling out. Then we can make light do funny tricks for us, like reflecting off a mirror in such a way that the angle of incidence doesn't equal the angle of reflection. But ordinarily, nearly all the paths except an extremely narrow band, cancel out - this is one of the keys to recovering the hallucination of classical physics.
**[http://lesswrong.com/lw/pl/no_individual_particles/ No Individual Particles] - One of the chief ways to confuse yourself while thinking about quantum mechanics, is to think as if photons were little billiard balls bouncing around. The appearance of little billiard balls is a special case of a deeper level on which there are only multiparticle configurations and amplitude flows. It is easy to set up physical situations in which there exists no fact of the matter as to which electron was originally which.
**[http://lesswrong.com/lw/pp/decoherence/ Decoherence] - A quantum system that factorizes can evolve into a system that doesn't factorize, destroying the illusion of independence. But entangling a quantum system with its environment, can <em>appear</em> to destroy entanglements that are already present. Entanglement with the environment can separate out the pieces of an amplitude distribution, preventing them from interacting with each other. Decoherence is fundamentally symmetric in time, but appears asymmetric because of the second law of thermodynamics.
**[http://lesswrong.com/lw/pq/the_socalled_heisenberg_uncertainty_principle/ The So-Called Heisenberg Uncertainty Principle] - Unlike classical physics, in quantum physics it is not possible to separate out a particle's "position" from its "momentum".
**[http://lesswrong.com/lw/pr/which_basis_is_more_fundamental/ Which Basis Is More Fundamental?] - The position basis can be computed locally in the configuration space; the momentum basis is not local. Why care about locality? Because it is a very deep principle; reality itself seems to favor it in some way.
**[http://lesswrong.com/lw/ps/where_physics_meets_experience/ Where Physics Meets Experience] - Meet the Ebborians, who reproduce by fission. The Ebborian brain is like a thick sheet of paper that splits down its thickness. They frequently experience dividing into two minds, and can talk to their other selves. It seems that their unified theory of physics is almost finished, and can answer every question, when one Ebborian asks: When <em>exactly</em> does one Ebborian become two people?
**[http://lesswrong.com/lw/pt/where_experience_confuses_physicists/ Where Experience Confuses Physicists] - It then turns out that the entire planet of Ebbore is splitting along a fourth-dimensional thickness, duplicating all the people within it. But why does the apparent chance of "ending up" in one of those worlds, equal the square of the fourth-dimensional thickness? Many mysterious answers are proposed to this question, and one non-mysterious one.
**[http://lesswrong.com/lw/pu/on_being_decoherent/ On Being Decoherent] - When a sensor measures a particle whose amplitude distribution stretches over space - perhaps seeing if the particle is to the left or right of some dividing line - then the standard laws of quantum mechanics call for the sensor+particle system to evolve into a state of (particle left, sensor measures LEFT) + (particle right, sensor measures RIGHT). But when we humans look at the sensor, it only seems to say "LEFT" or "RIGHT", never a mixture like "LIGFT". This, of course, is because we ourselves are made of particles, and subject to the standard quantum laws that imply decoherence. Under standard quantum laws, the final state is (particle left, sensor measures LEFT, human sees "LEFT") + (particle right, sensor measures RIGHT, human sees "RIGHT").
**[http://lesswrong.com/lw/pv/the_conscious_sorites_paradox/ The Conscious Sorites Paradox] - Decoherence is implicit in quantum physics, not an extra law on top of it. Asking exactly when "one world" splits into "two worlds" may be like asking when, if you keep removing grains of sand from a pile, it stops being a "heap". Even if you're inside the world, there may not be a definite answer. This puzzle does not arise only in quantum physics; the Ebborians could face it in a classical universe, or we could build sentient flat computers that split down their thickness. Is this really a physicist's problem?
**[http://lesswrong.com/lw/pw/decoherence_is_pointless/ Decoherence is Pointless] - There is no exact point at which decoherence suddenly happens. All of quantum mechanics is continuous and differentiable, and decoherent processes are no exception to this.
**[http://lesswrong.com/lw/px/decoherent_essences/ Decoherent Essences] - Decoherence is implicit within physics, not an extra law on top of it. You can choose representations that make decoherence harder to see, just like you can choose representations that make apples harder to see, but exactly the same physical process still goes on; the apple doesn't disappear and neither does decoherence. If you could make decoherence magically go away by choosing the right representation, we wouldn't need to shield quantum computers from the environment.
**[http://lesswrong.com/lw/py/the_born_probabilities/ The Born Probabilities] - The last <em>serious</em> mysterious question left in quantum physics: When a quantum world splits in two, why do we seem to have a greater probability of ending up in the larger blob, exactly proportional to the integral of the squared modulus? It's an open problem, but non-mysterious answers have been proposed. Try not to go funny in the head about it.
**[http://lesswrong.com/lw/pz/decoherence_as_projection/ Decoherence as Projection] - Since quantum evolution is linear and unitary, decoherence can be seen as projecting a wavefunction onto orthogonal subspaces. This can be neatly illustrated using polarized photons and the angle of the polarized sheet that will absorb or transmit them.
**[http://lesswrong.com/lw/q0/entangled_photons/ Entangled Photons] - Using our newly acquired understanding of photon polarizations, we see how to construct a quantum state of two photons in which, when you measure one of them, the person in the same world as you, will always find that the opposite photon has opposite quantum state. This is not because any influence is transmitted; it is just decoherence that takes place in a very symmetrical way, as can readily be observed in our calculations.
**[http://lesswrong.com/lw/q1/bells_theorem_no_epr_reality/ Bell's Theorem: No EPR "Reality"] - (Note: This post was designed to be read as a stand-alone, if desired.) Originally, the discoverers of quantum physics thought they had discovered an incomplete description of reality - that there was some deeper physical process they were missing, and this was why they couldn't predict exactly the results of quantum experiments. The math of Bell's Theorem is surprisingly simple, and we walk through it. Bell's Theorem rules out being able to <em>locally</em> predict a <em>single, unique</em> outcome of measurements - ruling out a way that Einstein, Podolsky, and Rosen once defined "reality". This shows how deep implicit philosophical assumptions can go. If worlds can split, so that there is no single unique outcome, then Bell's Theorem is no problem. Bell's Theorem does, however, rule out the idea that quantum physics describes our partial knowledge of a deeper physical state that could locally produce single outcomes - any such description will be inconsistent.
 +
**[http://lesswrong.com/lw/q2/spooky_action_at_a_distance_the_nocommunication/ Spooky Action at a Distance: The No-Communication Theorem] - As Einstein argued long ago, the quantum physics of his era - that is, the single-global-world interpretation of quantum physics, in which experiments have single unique random results - violates Special Relativity; it imposes a preferred space of simultaneity and requires a mysterious influence to be transmitted faster than light; which mysterious influence can never be used to transmit any useful information. Getting rid of the single global world dispels this mystery and puts everything back to normal again.
 +
**[http://lesswrong.com/lw/q5/quantum_nonrealism/ Quantum Non-Realism] - "Shut up and calculate" is the best approach you can take when none of your theories are very good. But that is not the same as claiming that "Shut up!" actually <em>is</em> a theory of physics. Saying "I don't know what these equations mean, but they seem to work" is a very different matter from saying: "These equations definitely don't mean anything, they just work!"
 +
**[http://lesswrong.com/lw/q6/collapse_postulates/ Collapse Postulates] - Early physicists simply didn't think of the possibility of more than one world - it just didn't occur to them, even though it's the straightforward result of applying the quantum laws at all levels. So they accidentally invented a completely and strictly unnecessary part of quantum theory to ensure there was only one world - a law of physics that says that parts of the wavefunction mysteriously and spontaneously disappear when decoherence prevents us from seeing them any more. If such a law really existed, it would be the only non-linear, non-unitary, non-differentiable, non-local, non-CPT-symmetric, acausal, faster-than-light phenomenon in all of physics.
 +
**[http://lesswrong.com/lw/q7/if_manyworlds_had_come_first/ If Many-Worlds Had Come First] - If early physicists had never made the mistake, and thought immediately to apply the quantum laws at all levels to produce macroscopic decoherence, then "collapse postulates" would today seem like a completely crackpot theory. In addition to their other problems, like FTL, the collapse postulate would be the only physical law that was informally specified - often in dualistic (mentalistic) terms - because it was the only fundamental law adopted without precise evidence to nail it down. Here, we get a glimpse at that alternate Earth.
 +
**[http://lesswrong.com/lw/q8/many_worlds_one_best_guess/ Many Worlds, One Best Guess] - Summarizes the arguments that nail down macroscopic decoherence, aka the "many-worlds interpretation". Concludes that many-worlds <em>wins outright</em> given the current state of evidence. The argument should have been over fifty years ago. New physical evidence could reopen it, but we have no particular reason to expect this.
 +
**[http://lesswrong.com/lw/qz/living_in_many_worlds/ Living in Many Worlds] - The many worlds of quantum mechanics are not some strange, alien universe into which you have been thrust. They are where you have always lived. Egan's Law: "It all adds up to normality." Then why care about quantum physics at all? Because there's still the question of <em>what</em> adds up to normality, and the answer to this question turns out to be, "Quantum physics." If you're thinking of building any strange philosophies around many-worlds, you probably shouldn't - that's not what it's for.
 +
**[http://lesswrong.com/lw/qm/machs_principle_antiepiphenomenal_physics/ Mach's Principle: Anti-Epiphenomenal Physics] - Could you tell if the whole universe were shifted an inch to the left? Could you tell if the whole universe was traveling left at ten miles per hour? Could you tell if the whole universe was <em>accelerating</em> left at ten miles per hour? Could you tell if the whole universe was rotating?
 +
**[http://lesswrong.com/lw/qo/relative_configuration_space/ Relative Configuration Space] - Maybe the reason why we can't observe absolute speeds, absolute positions, absolute accelerations, or absolute rotations, is that particles don't <em>have</em> absolute positions - only positions relative to each other. That is, maybe quantum physics takes place in a <em>relative</em> configuration space.
 +
**[http://lesswrong.com/lw/qp/timeless_physics/ Timeless Physics] - What time is it? How do you know? The question "What time is it right now?" may make around as much sense as asking "Where is the universe?" Not only that, our physics equations may not need a <em>t</em> in them!
 +
**[http://lesswrong.com/lw/qq/timeless_beauty/ Timeless Beauty] - To get rid of time you must reduce it to nontime. In timeless physics, everything that exists is perfectly global or perfectly local. The laws of physics are perfectly global; the configuration space is perfectly local. Every fundamentally existent ontological entity has a unique identity and a unique value. This beauty makes ugly theories much more visibly ugly; a collapse postulate becomes a visible scar on the perfection.
 +
**[http://lesswrong.com/lw/qr/timeless_causality/ Timeless Causality] - Using the modern, Bayesian formulation of causality, we can define causality without talking about time - define it purely in terms of relations. The river of time never flows, but it has a direction.
 +
**[http://lesswrong.com/lw/qx/timeless_identity/ Timeless Identity] - How can you be the same person tomorrow as today, in the river that never flows, when not a drop of water is shared between one time and another? Having used physics to completely trash all naive theories of identity, we reassemble a conception of persons and experiences from what is left. With a surprising practical application...
 +
**[http://lesswrong.com/lw/r0/thou_art_physics/ Thou Art Physics] - If the laws of physics control everything we do, then how can our choices be meaningful? Because <em>you are</em> physics. You aren't <em>competing</em> with physics for control of the universe, you are <em>within</em> physics. Anything <em>you</em> control is <em>necessarily</em> controlled by physics.
 +
**[http://lesswrong.com/lw/r1/timeless_control/ Timeless Control] - We throw away "time" but retain causality, and with it, the concepts "control" and "decide". To talk of something as having been "always determined" is mixing up a timeless and a timeful conclusion, with paradoxical results. When you take a perspective outside time, you have to be careful not to let your old, timeful intuitions run wild in the absence of their subject matter.
 +
**[http://lesswrong.com/lw/q9/the_failures_of_eld_science/ The Failures of Eld Science] - Fictional portrayal of a potential rationality dojo. A short story set in the same world as [http://lesswrong.com/lw/p1/initiation_ceremony/ Initiation Ceremony]. Future physics students look back on the cautionary tale of quantum physics.
 +
**[http://lesswrong.com/lw/qa/the_dilemma_science_or_bayes/ The Dilemma: Science or Bayes?] - The failure of first-half-of-20th-century-physics was not due to <em>straying</em> from the scientific method. Science and rationality - that is, Science and Bayesianism - aren't the same thing, and sometimes they give different answers.
 +
**[http://lesswrong.com/lw/qb/science_doesnt_trust_your_rationality/ Science Doesn't Trust Your Rationality] - The reason Science doesn't always agree with the exact, Bayesian, rational answer, is that Science doesn't <em>trust</em> you to be rational. It wants you to go out and gather overwhelming experimental evidence.
 +
**[http://lesswrong.com/lw/qc/when_science_cant_help/ When Science Can't Help] - If you have an idea, Science tells you to test it experimentally. If you spend 10 years testing the idea and the result comes out negative, Science slaps you on the back and says, "Better luck next time." If you want to spend 10 years testing a hypothesis that will actually turn out to be <em>right,</em> you'll have to try to do the thing that Science doesn't trust you to do: think rationally, and figure out the answer <em>before</em> you get clubbed over the head with it.
 +
**[http://lesswrong.com/lw/qd/science_isnt_strict_enough/ Science Isn't Strict Enough] - Science lets you believe any damn stupid idea that hasn't been refuted by experiment. Bayesianism says there is always an exactly rational degree of belief given your current evidence, and this does not shift a nanometer to the left or to the right depending on your whims. Science is a social freedom - we let people test whatever hypotheses they like, because we don't trust the village elders to decide in advance - but you shouldn't confuse that with an individual standard of rationality.
 +
**[http://lesswrong.com/lw/qe/do_scientists_already_know_this_stuff/ Do Scientists Already Know This Stuff?] - No. Maybe someday it will be part of standard scientific training, but for now, it's not, and the absence is visible.
 +
**[http://lesswrong.com/lw/qf/no_safe_defense_not_even_science/ No Safe Defense, Not Even Science] - Why am I trying to break your trust in Science? Because you can't think and trust at the same time. The social rules of Science are verbal rather than quantitative; it is possible to believe you are following them. With Bayesianism, it is never possible to do an exact calculation and get the exact rational answer that you know exists. You are <em>visibly</em> less than perfect, and so you will not be tempted to trust yourself.
 +
**[http://lesswrong.com/lw/qg/changing_the_definition_of_science/ Changing the Definition of Science] - Many of these ideas are surprisingly conventional, and being floated around by other thinkers. I'm a good deal less of a lonely iconoclast than I seem; maybe it's just the way I talk.
 +
**[http://lesswrong.com/lw/qi/faster_than_science/ Faster Than Science] - Is it really possible to arrive at the truth <em>faster</em> than Science does? Not only is it possible, but the social process of science relies on scientists doing so - when they choose which hypotheses to test. In many answer spaces it's not possible to find the true hypothesis by accident. Science leaves it up to experiment to <em>socially</em> declare who was right, but if there weren't <em>some</em> people who could get it right in the absence of overwhelming experimental proof, science would be stuck.
 +
**[http://lesswrong.com/lw/qj/einsteins_speed/ Einstein's Speed] - Albert was unusually good at finding the right theory in the presence of only a small amount of experimental evidence. Even more unusually, he admitted it - he claimed to know the theory was right, even in advance of the public proof. It's possible to arrive at the truth by thinking great high-minded thoughts of the sort that Science does not trust you to think, but it's a <em>lot harder</em> than arriving at the truth in the presence of overwhelming evidence.
 +
**[http://lesswrong.com/lw/qk/that_alien_message/ That Alien Message] - Einstein used evidence more efficiently than other physicists, but he was still extremely inefficient in an <em>absolute</em> sense. If a huge team of cryptographers and physicists were examining an interstellar transmission, going over it bit by bit, we could deduce principles on the order of Galilean gravity just from seeing one or two frames of a picture. As if the very first human to see an apple fall, had, on the instant, realized that its position went as the square of the time and that this implied constant acceleration.
 +
**[http://lesswrong.com/lw/ql/my_childhood_role_model/ My Childhood Role Model] - I looked up to the ideal of a Bayesian superintelligence, not Einstein.
 +
**[http://lesswrong.com/lw/qs/einsteins_superpowers/ Einstein's Superpowers] - There's an unfortunate tendency to talk as if Einstein had superpowers - as if, even before Einstein was famous, he had an inherent disposition to be Einstein - a potential as rare as his fame and as magical as his deeds. Yet the way you acquire superpowers is not by being born with them, but by seeing, with a sudden shock, that they are perfectly normal.
 +
**[http://lesswrong.com/lw/qt/class_project/ Class Project] - From the world of <em>Initiation Ceremony.</em> Brennan and the others are faced with their midterm exams.
 +
**[http://lesswrong.com/lw/qy/why_quantum/ Why Quantum?] - Why do a series on quantum mechanics? Some of the many morals that are best illustrated by the tale of quantum mechanics and its misinterpretation.
 +
 +
*[[The top 1% fallacy]]
 +
**[http://lesswrong.com/lw/gy/you_are_not_hiring_the_top_1/ You Are Not Hiring the Top 1%] - The interview pool is subject to a selection bias: it is skewed toward those who are not successful or happy in their current jobs.
 +
 +
*[[Third option]]
 +
**[http://lesswrong.com/lw/hu/the_third_alternative/ The Third Alternative] - On not skipping the step of looking for additional alternatives.
 +
 +
*[[Traditional rationality]]
 +
**[http://lesswrong.com/lw/q9/the_failures_of_eld_science/ The Failures of Eld Science] - Fictional portrayal of a potential rationality dojo. A short story set in the same world as [http://lesswrong.com/lw/p1/initiation_ceremony/ Initiation Ceremony]. Future physics students look back on the cautionary tale of quantum physics.
 +
 +
*[[Tsuyoku naritai]]
 +
**[http://lesswrong.com/lw/h8/tsuyoku_naritai_i_want_to_become_stronger/ Tsuyoku Naritai! (I Want To Become Stronger)] - Do not glory in your weakness; instead, aspire to become stronger, and study your flaws so as to remove them.
 +
**[http://lesswrong.com/lw/h9/tsuyoku_vs_the_egalitarian_instinct/ Tsuyoku vs. the Egalitarian Instinct] - There may be [[evolutionary psychology|evolutionary psychological]] factors that encourage [[modesty]] and mediocrity, at least in [[signaling|appearance]]; while some of that may still apply today, you should mentally plan and strive to pull ahead, if you are doing things right.
 +
**[http://lesswrong.com/lw/2c/a_sense_that_more_is_possible/ A Sense That More Is Possible] - Why do people seem to care more about systematic methods of punching than systematic methods of thinking?
 +
 +
*[[Underconfidence]]
 +
**[http://lesswrong.com/lw/gs/i_dont_know/ "I don't know."] - You can pragmatically say "I don't know", but you rationally should have a probability distribution.
  
 
*[[Zombies (sequence)]]
 
*[[Zombies (sequence)]]
Line 224: Line 501:
 
*[[Line of retreat]]
 
*[[Line of retreat]]
 
*[[Wrong question]]
 
*[[Wrong question]]
 +
*[[P-zombie]]
 
*[[Disagreement]]
 
*[[Disagreement]]
 
*[[Created already in motion]]
 
*[[Created already in motion]]
 
*[[Genetic fallacy]]
 
*[[Genetic fallacy]]
 +
*[[Yudkowsky's coming of age]]
 
*[[The bottom line]]
 
*[[The bottom line]]
 +
*[[Quotes]]
 
*[[Methods of verifying rationality]]
 
*[[Methods of verifying rationality]]
 
*[[Near/far thinking]]
 
*[[Near/far thinking]]
Line 234: Line 514:
 
==The following concepts are not in the All Articles pages:==
 
==The following concepts are not in the All Articles pages:==
  
*[[A sense that more is possible]]
 
*[[Guessing the teacher's password]]
 
 
*[[Meme lineage]]
 
*[[Meme lineage]]
*[[The Less Wrong Video Game]]
 
*[[The utility function is not up for grabs]]
 
  
  
 
==The following concepts are in the All Articles page, but are redirects:==
 
==The following concepts are in the All Articles page, but are redirects:==
  
 +
*[[P-zombie]]
 
*[[Teacher's password]]
 
*[[Teacher's password]]
 +
*[[Yudkowsky's coming of age]]
  
  
 
==The following articles in the [[Less Wrong/All Articles|All Articles]] index are missing an entry:==
 
==The following articles in the [[Less Wrong/All Articles|All Articles]] index are missing an entry:==
 
*[http://lesswrong.com/lw/i2/two_more_things_to_unlearn_from_school/ Two More Things to Unlearn from School] is missing the following concepts:
 
**[[Guessing the teacher's password]]
 
 
*[http://lesswrong.com/lw/iq/guessing_the_teachers_password/ Guessing the Teacher's Password] is missing the following concepts:
 
**[[Guessing the teacher's password]]
 
 
*[http://lesswrong.com/lw/nb/something_to_protect/ Something to Protect] is missing the following concepts:
 
**[[The utility function is not up for grabs]]
 
 
*[http://lesswrong.com/lw/nc/newcombs_problem_and_regret_of_rationality/ Newcomb's Problem and Regret of Rationality] is missing the following concepts:
 
**[[The utility function is not up for grabs]]
 
 
*[http://lesswrong.com/lw/2c/a_sense_that_more_is_possible/ A Sense That More Is Possible] is missing the following concepts:
 
**[[A sense that more is possible]]
 
 
*[http://lesswrong.com/lw/7i/rationality_is_systematized_winning/ Rationality is Systematized Winning] is missing the following concepts:
 
**[[The utility function is not up for grabs]]
 
 
*[http://lesswrong.com/lw/fg/no_one_knows_stuff/ No One Knows Stuff] is missing the following concepts:
 
**[[A sense that more is possible]]
 
 
*[http://lesswrong.com/lw/182/the_absentminded_driver/ The Absent-Minded Driver] is missing the following concepts:
 
**[[Updateless decision theory]]
 
  
  
Line 280: Line 534:
 
*[[Religion]] is missing the following article links:
 
*[[Religion]] is missing the following article links:
 
**[http://lesswrong.com/lw/gv/outside_the_laboratory/ Outside the Laboratory] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
**[http://lesswrong.com/lw/gv/outside_the_laboratory/ Outside the Laboratory] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 +
**[http://lesswrong.com/lw/hv/third_alternatives_for_afterlifeism/ Third Alternatives for Afterlife-ism] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 +
**[http://lesswrong.com/lw/i4/belief_in_belief/ Belief in Belief] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 +
**[http://lesswrong.com/lw/kr/an_alien_god/ An Alien God] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 +
**[http://lesswrong.com/lw/o4/leave_a_line_of_retreat/ Leave a Line of Retreat] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 +
**[http://lesswrong.com/lw/o5/the_second_law_of_thermodynamics_and_engines_of/ The Second Law of Thermodynamics, and Engines of Cognition] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
  
 
*[[Akrasia]] is missing the following article links:
 
*[[Akrasia]] is missing the following article links:
 
**[http://lesswrong.com/lw/h7/selfdeception_hypocrisy_or_akrasia/ Self-deception: Hypocrisy or Akrasia?] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
**[http://lesswrong.com/lw/h7/selfdeception_hypocrisy_or_akrasia/ Self-deception: Hypocrisy or Akrasia?] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
*[[Fully general counterargument]] is missing the following article links:
 
**[http://lesswrong.com/lw/he/knowing_about_biases_can_hurt_people/ Knowing About Biases Can Hurt People] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
 
*[[Altruism]] is missing the following article links:
 
**[http://lesswrong.com/lw/n3/circular_altruism/ Circular Altruism] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
  
 
*[[Natural world]] is missing the following article links:
 
*[[Natural world]] is missing the following article links:
 
**[http://lesswrong.com/lw/hr/universal_law/ Universal Law] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
**[http://lesswrong.com/lw/hr/universal_law/ Universal Law] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
**[http://lesswrong.com/lw/hs/think_like_reality/ Think Like Reality] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
**[http://lesswrong.com/lw/hs/think_like_reality/ Think Like Reality] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
*[[Bystander effect]] is missing the following article links:
 
**[http://lesswrong.com/lw/ht/beware_the_unsurprised/ Beware the Unsurprised] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
  
 
*[[Package-deal fallacy]] is missing the following article links:
 
*[[Package-deal fallacy]] is missing the following article links:
Line 306: Line 556:
 
**[http://lesswrong.com/lw/i2/two_more_things_to_unlearn_from_school/ Two More Things to Unlearn from School] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
**[http://lesswrong.com/lw/i2/two_more_things_to_unlearn_from_school/ Two More Things to Unlearn from School] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
**[http://lesswrong.com/lw/ip/fake_explanations/ Fake Explanations] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
**[http://lesswrong.com/lw/ip/fake_explanations/ Fake Explanations] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
*[[Free-floating belief]] is missing the following article links:
 
**[http://lesswrong.com/lw/i3/making_beliefs_pay_rent_in_anticipated_experiences/ Making Beliefs Pay Rent (in Anticipated Experiences)] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
  
 
*[[Invisible dragon]] is missing the following article links:
 
*[[Invisible dragon]] is missing the following article links:
Line 318: Line 565:
 
*[[Cheering]] is missing the following article links:
 
*[[Cheering]] is missing the following article links:
 
**[http://lesswrong.com/lw/i6/professing_and_cheering/ Professing and Cheering] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
**[http://lesswrong.com/lw/i6/professing_and_cheering/ Professing and Cheering] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
*[[Belief as attire]] is missing the following article links:
 
**[http://lesswrong.com/lw/ir/science_as_attire/ Science as Attire] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
  
 
*[[Religion's claim to be non-disprovable]] is missing the following article links:
 
*[[Religion's claim to be non-disprovable]] is missing the following article links:
Line 330: Line 574:
 
*[[Broadness]] is missing the following article links:
 
*[[Broadness]] is missing the following article links:
 
**[http://lesswrong.com/lw/ic/the_virtue_of_narrowness/ The Virtue of Narrowness] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
**[http://lesswrong.com/lw/ic/the_virtue_of_narrowness/ The Virtue of Narrowness] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 +
 +
*[[Cached thought]] is missing the following article links:
 +
**[http://lesswrong.com/lw/ic/the_virtue_of_narrowness/ The Virtue of Narrowness] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 +
**[http://lesswrong.com/lw/d2/cached_procrastination/ Cached Procrastination] by [http://lesswrong.com/user/jimrandomh jimrandomh]
  
 
*[[Probabilistic counterevidence]] is missing the following article links:
 
*[[Probabilistic counterevidence]] is missing the following article links:
Line 385: Line 633:
  
 
*[[Traditional rationality]] is missing the following article links:
 
*[[Traditional rationality]] is missing the following article links:
**[http://lesswrong.com/lw/k1/no_one_can_exempt_you_from_rationalitys_laws/ No One Can Exempt You From Rationality's Laws] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
 
**[http://lesswrong.com/lw/qa/the_dilemma_science_or_bayes/ The Dilemma: Science or Bayes?] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
**[http://lesswrong.com/lw/qa/the_dilemma_science_or_bayes/ The Dilemma: Science or Bayes?] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
**[http://lesswrong.com/lw/qb/science_doesnt_trust_your_rationality/ Science Doesn't Trust Your Rationality] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
**[http://lesswrong.com/lw/qb/science_doesnt_trust_your_rationality/ Science Doesn't Trust Your Rationality] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
Line 425: Line 672:
 
**[http://lesswrong.com/lw/je/doublethink_choosing_to_be_biased/ Doublethink (Choosing to be Biased)] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
**[http://lesswrong.com/lw/je/doublethink_choosing_to_be_biased/ Doublethink (Choosing to be Biased)] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
  
*[[Representativeness heuristic]] is missing the following article links:
+
*[[Priming]] is missing the following article links:
**[http://lesswrong.com/lw/ji/conjunction_fallacy/ Conjunction Fallacy] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
+
**[http://lesswrong.com/lw/3b/never_leave_your_room/ Never Leave Your Room] by [http://lesswrong.com/user/Yvain Yvain]
 
 
*[[Privileging the hypothesis]] is missing the following article links:
 
**[http://lesswrong.com/lw/o6/perpetual_motion_beliefs/ Perpetual Motion Beliefs] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
  
 
*[[Paradox]] is missing the following article links:
 
*[[Paradox]] is missing the following article links:
 
**[http://lesswrong.com/lw/kd/pascals_mugging_tiny_probabilities_of_vast/ Pascal's Mugging: Tiny Probabilities of Vast Utilities] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
**[http://lesswrong.com/lw/kd/pascals_mugging_tiny_probabilities_of_vast/ Pascal's Mugging: Tiny Probabilities of Vast Utilities] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
**[http://lesswrong.com/lw/kn/torture_vs_dust_specks/ Torture vs. Dust Specks] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
**[http://lesswrong.com/lw/kn/torture_vs_dust_specks/ Torture vs. Dust Specks] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
*[[Illusion of transparency]] is missing the following article links:
 
**[http://lesswrong.com/lw/ke/illusion_of_transparency_why_no_one_understands/ Illusion of Transparency: Why No One Understands You] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
  
 
*[[General knowledge]] is missing the following article links:
 
*[[General knowledge]] is missing the following article links:
Line 451: Line 692:
 
**[http://lesswrong.com/lw/mb/lonely_dissent/ Lonely Dissent] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
**[http://lesswrong.com/lw/mb/lonely_dissent/ Lonely Dissent] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
**[http://lesswrong.com/lw/md/cultish_countercultishness/ Cultish Countercultishness] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
**[http://lesswrong.com/lw/md/cultish_countercultishness/ Cultish Countercultishness] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
*[[Rationalists should win]] is missing the following article links:
 
**[http://lesswrong.com/lw/nb/something_to_protect/ Something to Protect] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
  
 
*[[Free will]] is missing the following article links:
 
*[[Free will]] is missing the following article links:
Line 470: Line 708:
 
**[http://lesswrong.com/lw/pa/gazp_vs_glut/ GAZP vs. GLUT] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
**[http://lesswrong.com/lw/pa/gazp_vs_glut/ GAZP vs. GLUT] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
**[http://lesswrong.com/lw/tv/excluding_the_supernatural/ Excluding the Supernatural] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
**[http://lesswrong.com/lw/tv/excluding_the_supernatural/ Excluding the Supernatural] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
*[[No safe defense]] is missing the following article links:
 
**[http://lesswrong.com/lw/qf/no_safe_defense_not_even_science/ No Safe Defense, Not Even Science] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
  
 
*[[Egan's law]] is missing the following article links:
 
*[[Egan's law]] is missing the following article links:
 
**[http://lesswrong.com/lw/r5/the_quantum_physics_sequence/ The Quantum Physics Sequence] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 
**[http://lesswrong.com/lw/r5/the_quantum_physics_sequence/ The Quantum Physics Sequence] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 +
 +
*[[Yudkowsky's coming of age]] is missing the following article links:
 +
**[http://lesswrong.com/lw/uk/beyond_the_reach_of_god/ Beyond the Reach of God] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
 +
**[http://lesswrong.com/lw/ul/my_bayesian_enlightenment/ My Bayesian Enlightenment] by [http://lesswrong.com/user/Eliezer_Yudkowsky Eliezer_Yudkowsky]
  
 
*[[Methods of verifying rationality]] is missing the following article links:
 
*[[Methods of verifying rationality]] is missing the following article links:
Line 491: Line 730:
 
==The following See Also links only go one way:==
 
==The following See Also links only go one way:==
  
 +
*[[2-place and 1-place words]] -> [[Anthropomorphism]]
 +
*[[Absolute certainty]] -> [[Bayesian probability]]
 
*[[Absolute certainty]] -> [[Scoring rule]]
 
*[[Absolute certainty]] -> [[Scoring rule]]
 +
*[[Absolute certainty]] -> [[Antiprediction]]
 +
*[[Absolute certainty]] -> [[Mind projection fallacy]]
 +
*[[Absolute certainty]] -> [[Doubt]]
 +
*[[Absurdity heuristic]] -> [[Representativeness heuristic]]
 +
*[[Absurdity heuristic]] -> [[Shut up and multiply]]
 
*[[Absurdity heuristic]] -> [[Exploratory engineering]]
 
*[[Absurdity heuristic]] -> [[Exploratory engineering]]
 +
*[[Affective death spiral]] -> [[Motivated skepticism]]
 
*[[Affective death spiral]] -> [[In-group bias]]
 
*[[Affective death spiral]] -> [[In-group bias]]
 
*[[Affective death spiral]] -> [[Crackpot]]
 
*[[Affective death spiral]] -> [[Crackpot]]
Line 506: Line 753:
 
*[[Anti-epistemology]] -> [[Rationality]]
 
*[[Anti-epistemology]] -> [[Rationality]]
 
*[[Anti-epistemology]] -> [[Affective death spiral]]
 
*[[Anti-epistemology]] -> [[Affective death spiral]]
*[[Antiprediction]] -> [[Making beliefs pay rent]]
+
*[[Antiprediction]] -> [[Privileging the hypothesis]]
 
*[[Arguing by analogy]] -> [[Near/far thinking]]
 
*[[Arguing by analogy]] -> [[Near/far thinking]]
 
*[[Arguing by analogy]] -> [[Fake simplicity]]
 
*[[Arguing by analogy]] -> [[Fake simplicity]]
 +
*[[Arguing by definition]] -> [[Beliefs require observations]]
 +
*[[Arguing by definition]] -> [[Illusion of transparency]]
 +
*[[Arguments as soldiers]] -> [[Policy debates should not appear one-sided]]
 
*[[Arguments as soldiers]] -> [[Adversarial process]]
 
*[[Arguments as soldiers]] -> [[Adversarial process]]
 
*[[Arguments as soldiers]] -> [[Signaling]]
 
*[[Arguments as soldiers]] -> [[Signaling]]
Line 532: Line 782:
 
*[[Belief as attire]] -> [[Curiosity stopper]]
 
*[[Belief as attire]] -> [[Curiosity stopper]]
 
*[[Belief in belief]] -> [[Making beliefs pay rent]]
 
*[[Belief in belief]] -> [[Making beliefs pay rent]]
 +
*[[Beliefs require observations]] -> [[Evidence]]
 
*[[Beliefs require observations]] -> [[Belief update]]
 
*[[Beliefs require observations]] -> [[Belief update]]
 
*[[Beliefs require observations]] -> [[Belief in belief]]
 
*[[Beliefs require observations]] -> [[Belief in belief]]
Line 540: Line 791:
 
*[[Bystander effect]] -> [[Conformity bias]]
 
*[[Bystander effect]] -> [[Conformity bias]]
 
*[[Bystander effect]] -> [[Shut up and multiply]]
 
*[[Bystander effect]] -> [[Shut up and multiply]]
*[[Cached thought]] -> [[Groupthink]]
 
 
*[[Cached thought]] -> [[Information cascade]]
 
*[[Cached thought]] -> [[Information cascade]]
 
*[[Cached thought]] -> [[Status quo bias]]
 
*[[Cached thought]] -> [[Status quo bias]]
Line 549: Line 799:
 
*[[Complexity of value]] -> [[Fun theory]]
*[[Conformity bias]] -> [[Affective death spiral]]
*[[Conformity bias]] -> [[Groupthink]]
*[[Conformity bias]] -> [[In-group bias]]
*[[Conformity bias]] -> [[Akrasia]]
 
*[[Decision theory]] -> [[Instrumental rationality]]
*[[Decision theory]] -> [[Causality]]
*[[Decision theory]] -> [[Expected utility]]
*[[Decision theory]] -> [[Timeless decision theory]]
*[[Decoherence]] -> [[Thermodynamics]]
*[[Defensibility]] -> [[Third option]]
 
*[[Detached lever fallacy]] -> [[Semantic stopsign]]
*[[Detached lever fallacy]] -> [[Cached thought]]
*[[Detached lever fallacy]] -> [[Guessing the teacher's password]]
*[[Doubt]] -> [[Motivated skepticism]]
*[[Egalitarianism]] -> [[Humility]]
 
*[[Epistemic hygiene]] -> [[Improper belief]]
*[[Epistemic hygiene]] -> [[Rational evidence]]
*[[Epistemic hygiene]] -> [[Status quo bias]]
*[[Error of crowds]] -> [[Wisdom of the crowd]]
*[[Everett branch]] -> [[Many-worlds interpretation]]
 
*[[Existential risk]] -> [[Unfriendly AI]]
*[[Existential risk]] -> [[Future]]
*[[Expected utility]] -> [[Allais paradox]]
*[[Expected utility]] -> [[Instrumental rationality]]
*[[Extraordinary evidence]] -> [[Belief update]]
*[[Fake simplicity]] -> [[Occam's razor]]
 
*[[Group selection]] -> [[Evolution]]
*[[Group selection]] -> [[Alienness of evolution]]
*[[Groupthink]] -> [[Affective death spiral]]
*[[Guessing the teacher's password]] -> [[Memorization]]
*[[Guessing the teacher's password]] -> [[Understanding]]
*[[Guessing the teacher's password]] -> [[Improper belief]]
*[[Guessing the teacher's password]] -> [[Belief as attire]]
*[[Guessing the teacher's password]] -> [[Separate magisteria]]
*[[Guessing the teacher's password]] -> [[Illusion of transparency]]
*[[Halo effect]] -> [[Priming]]
*[[Hedonism]] -> [[Shut up and multiply]]
*[[Hedonism]] -> [[Utility function]]
*[[Hedonism]] -> [[Complexity of value]]
*[[Hope]] -> [[Fuzzies]]
*[[Hope]] -> [[Shut up and multiply]]
 
*[[How an algorithm feels]] -> [[Magic]]
*[[How an algorithm feels]] -> [[Fake simplicity]]
*[[How an algorithm feels]] -> [[P-zombie]]
*[[Human universal]] -> [[Evolutionary psychology]]
*[[Human universal]] -> [[Fake simplicity]]
 
*[[Intelligence explosion]] -> [[Artificial general intelligence]]
*[[Intelligence explosion]] -> [[Lawful intelligence]]
*[[Joy in discovery]] -> [[Reductionism (sequence)]]
*[[Joy in discovery]] -> [[Fun theory]]
*[[Joy in the Merely Real]] -> [[Reality is normal]]
*[[Joy in the Merely Real]] -> [[Egan's Law]]
 
*[[Least convenient possible world]] -> [[Rationalization]]
*[[Litany of Tarski]] -> [[Self-deception]]
*[[Locate the hypothesis]] -> [[Amount of evidence]]
*[[Loss aversion]] -> [[Prospect theory]]
*[[Loss aversion]] -> [[Risk aversion]]
 
*[[Magic]] -> [[Detached lever fallacy]]
*[[Magic]] -> [[Occam's razor]]
*[[Magic]] -> [[Fake simplicity]]
*[[Magical categories]] -> [[Complexity of value]]
*[[Magical categories]] -> [[Detached lever fallacy]]
*[[Magical categories]] -> [[Friendly artificial intelligence]]
*[[Making beliefs pay rent]] -> [[Beliefs require observations]]
*[[Map and Territory (sequence)]] -> [[Map and territory]]
*[[Map and Territory (sequence)]] -> [[Rationality]]
*[[Map and Territory (sequence)]] -> [[Truth]]
*[[Map and Territory (sequence)]] -> [[Evidence]]
*[[Map and Territory (sequence)]] -> [[Priors]]
*[[Map and Territory (sequence)]] -> [[Occam's razor]]
*[[Map and Territory (sequence)]] -> [[Improper belief]]
*[[Map and Territory (sequence)]] -> [[Lawful intelligence]]
*[[Map and territory]] -> [[Map and territory (sequence)]]
*[[Map and territory]] -> [[The map is not the territory]]
*[[Map and territory]] -> [[Epistemic rationality]]
 
*[[Mind design space]] -> [[Artificial general intelligence]]
*[[Mind design space]] -> [[Evolution as alien god]]
*[[Mind design space]] -> [[Paperclip maximizer]]
*[[Mind projection fallacy]] -> [[The map is not the territory]]
*[[Modesty]] -> [[Modesty argument]]
*[[Modesty argument]] -> [[Disagreement]]
*[[Motivated skepticism]] -> [[Least convenient possible world]]
*[[Motivated skepticism]] -> [[Filtered evidence]]
*[[Motivated skepticism]] -> [[Conservation of expected evidence]]
*[[Motivated skepticism]] -> [[Color politics]]
*[[Motivated skepticism]] -> [[Motivated cognition]]
*[[Motivated skepticism]] -> [[Dangerous knowledge]]
*[[Nanotechnology]] -> [[Exploratory engineering]]
*[[Nanotechnology]] -> [[Rational evidence]]
 
*[[Outside view]] -> [[Absurdity heuristic]]
*[[Paperclip maximizer]] -> [[Unfriendly AI]]
*[[Paperclip maximizer]] -> [[Complexity of value]]
*[[Pascal's mugging]] -> [[Expected utility]]
*[[Pascal's mugging]] -> [[Utilitarianism]]
*[[Pascal's mugging]] -> [[Repugnant conclusion]]
*[[Pascal's mugging]] -> [[Pascal's wager]]
*[[Peak-end rule]] -> [[Representativeness heuristic]]
*[[Philosophical zombie]] -> [[Zombies (sequence)]]
*[[Philosophical zombie]] -> [[How an algorithm feels]]
*[[Phlogiston]] -> [[Reductionism]]
*[[Planning fallacy]] -> [[Near/far thinking]]
 
*[[Positive bias]] -> [[Availability bias]]
*[[Positive bias]] -> [[Surprise]]
*[[Possibility]] -> [[Probability]]
*[[Possibility]] -> [[Impossibility]]
*[[Possibility]] -> [[Privileging the hypothesis]]
*[[Possibility]] -> [[Absolute certainty]]
*[[Possibility]] -> [[Not technically a lie]]
 
*[[Prospect theory]] -> [[Heuristics and biases]]
*[[Prospect theory]] -> [[Decision theory]]
*[[Prospect theory]] -> [[Expected utility]]
*[[Quantum mechanics]] -> [[Configuration space]]
*[[Quantum mechanics]] -> [[Mangled worlds]]
 
*[[Reductionism]] -> [[Free will]]
*[[Reductionism]] -> [[P-zombie]]
*[[Religion]] -> [[Groupthink]]
*[[Religion]] -> [[Sanity]]
*[[Religion]] -> [[Truth]]
*[[Religion]] -> [[Magic]]
*[[Scales of justice fallacy]] -> [[Color politics]]
*[[Scales of justice fallacy]] -> [[Arguments as soldiers]]
*[[Scales of justice fallacy]] -> [[Filtered evidence]]
*[[Scales of justice fallacy]] -> [[Narrative fallacy]]
 
*[[Separate magisteria]] -> [[Absurdity heuristic]]
*[[Separate magisteria]] -> [[Dangerous knowledge]]
*[[Shut up and multiply]] -> [[Framing effect]]
*[[Shut up and multiply]] -> [[Bite the bullet]]
*[[Shut up and multiply]] -> [[Intuition]]
*[[Shut up and multiply]] -> [[Expected utility]]
*[[Simple math of everything]] -> [[General knowledge]]
*[[Simple math of everything]] -> [[Technical explanation]]
 
*[[Statistical bias]] -> [[Bias]]
*[[Status]] -> [[Signaling]]
*[[Superexponential conceptspace]] -> [[Locate the hypothesis]]
*[[Superstimulus]] -> [[Evolutionary psychology]]
*[[Surprise]] -> [[Curiosity]]
 
*[[Sympathetic magic]] -> [[Priming]]
*[[Technical explanation]] -> [[Beliefs require observations]]
*[[The Fun Theory Sequence]] -> [[Fun theory]]
*[[The Quantum Physics Sequence]] -> [[Quantum mechanics]]
*[[The Quantum Physics Sequence]] -> [[Decoherence]]
*[[The Quantum Physics Sequence]] -> [[Many-worlds interpretation]]
*[[The top 1% fallacy]] -> [[Overconfidence]]
*[[The utility function is not up for grabs]] -> [[Rationalists should win]]
*[[The utility function is not up for grabs]] -> [[Something to protect]]
*[[Third option]] -> [[Color politics]]
*[[Third option]] -> [[Black swan]]
 
*[[Unsupervised universe]] -> [[Human universal]]
*[[Unsupervised universe]] -> [[Reductionism]]
*[[Updateless decision theory]] -> [[Decision theory]]
*[[Utilitarianism]] -> [[Metaethics sequence]]
*[[Utilitarianism]] -> [[Shut up and multiply]]
*[[Utilitarianism]] -> [[Game theory]]
*[[Utilitarianism]] -> [[Hedons]]
*[[Utilons]] -> [[Shut up and multiply]]
*[[Utilons]] -> [[Fuzzies]]
*[[Wireheading]] -> [[Complexity of value]]
*[[Zombies (sequence)]] -> [[P-zombie]]
*[[Zombies (sequence)]] -> [[Reductionism (sequence)]]
  
 
*[[2-place and 1-place words]]
*[[A Human's Guide to Words]]
*[[A sense that more is possible]]
*[[Absolute certainty]]
*[[Absurdity heuristic]]
 
*[[Evolutionary psychology]]
*[[Existential risk]]
*[[Expected utility]]
*[[Extraordinary evidence]]
*[[Fake simplicity]]
 
*[[Group selection]]
*[[Groupthink]]
*[[Guessing the teacher's password]]
*[[Halo effect]]
*[[Hard takeoff]]
 
*[[Making beliefs pay rent]]
*[[Many-worlds interpretation]]
*[[Map and Territory (sequence)]]
*[[Map and territory]]
*[[Marginally zero-sum game]]
 
*[[Other-optimizing]]
*[[Outside view]]
*[[P-zombie]]
*[[Paperclip maximizer]]
*[[Paranoid debating]]
 
*[[Peak-end rule]]
*[[Perceptual control theory]]
*[[Philosophical zombie]]
*[[Phlogiston]]
*[[Planning fallacy]]
 
*[[Problem of verifying rationality]]
*[[Prospect theory]]
*[[Puzzle game index]]
*[[Quantum mechanics]]
*[[Rational evidence]]
 
*[[Shut up and multiply]]
*[[Simple math of everything]]
*[[Singularity paper]]
*[[Slowness of evolution]]
*[[Something to protect]]
 
*[[Stupidity]]
*[[Stupidity of evolution]]
*[[Superexponential conceptspace]]
*[[Superstimulus]]
*[[Surprise]]
 
*[[Technical explanation]]
*[[Teleology]]
*[[The Craft and the Community]]
*[[The Fun Theory Sequence]]
*[[The Hanson-Yudkowsky AI-Foom Debate]]
*[[The Less Wrong Video Game]]
*[[The Quantum Physics Sequence]]
*[[The top 1% fallacy]]
*[[The utility function is not up for grabs]]
*[[Third option]]
*[[Timeless decision theory]]
 
*[[Utilons]]
*[[Valley of bad rationality]]
*[[Wireheading]]
*[[Yudkowsky's Coming of Age]]
*[[Zombies (sequence)]]
 
*[[Less Wrong/2008 Articles]]
*[[Less Wrong/2009 Articles]]
*[[Less Wrong/2010 Articles]]
 
*[[Less Wrong/2008 Articles/Summaries]]
*[[Less Wrong/2009 Articles/Summaries]]
*[[Less Wrong/2010 Articles/Summaries]]
Latest revision as of 09:26, 5 January 2010

==The following concept pages have comments:==

==The following concept pages have the "Overcoming Bias Articles" header:==

==The following concept pages have "External references" instead of "References":==

==The following concept pages have a miscapitalized "See Also" header:==

==The following concept pages have an author link that links to an external site:==

*[[Puzzle game index]]

==The following concept pages have an extra newline after the wikilink template:==

==The following concept pages have the See Also section before the Blog Posts section:==

==The following article links have a wrong or improperly formatted title:==

==The following article links have a summary available that was not added to the page:==

*[[A sense that more is possible]]
**[http://lesswrong.com/lw/2c/a_sense_that_more_is_possible/ A Sense That More Is Possible] - Why do people seem to care more about systematic methods of punching than systematic methods of thinking?

*[[Adaptation executers]]
**[http://lesswrong.com/lw/l0/adaptationexecuters_not_fitnessmaximizers/ Adaptation-Executers, not Fitness-Maximizers] - A central principle of evolutionary biology in general, and [[evolutionary psychology]] in particular. If we regarded human taste buds as trying to ''maximize fitness'', we might expect that, say, humans fed a diet too high in calories and too low in micronutrients, would begin to find lettuce delicious, and cheeseburgers distasteful. But it is better to regard taste buds as an ''executing adaptation'' - they are adapted to an ancestral environment in which calories, not micronutrients, were the limiting factor.

*[[Affect heuristic]]
**[http://lesswrong.com/lw/lg/the_affect_heuristic/ The Affect Heuristic] - Positive and negative emotional impressions exert a greater effect on many decisions than does rational analysis.

*[[Affective death spiral]]
**[http://lesswrong.com/lw/lz/guardians_of_the_truth/ Guardians of the Truth] - Endorsing a concept of truth is not the same as endorsing a particular belief as eternally, absolutely, knowably true.

*[[Antiprediction]]
**[http://lesswrong.com/lw/if/your_strength_as_a_rationalist/ Your Strength as a Rationalist] - A hypothesis that forbids nothing, permits everything, and thereby fails to constrain anticipation. Your strength as a rationalist is your ability to be more confused by fiction than by reality. If you are equally good at explaining any outcome, you have zero knowledge.

*[[Arguing by definition]]
**Arguing "By Definition" - You claim "X, by definition, is a Y!" On such occasions you're almost certainly trying to sneak in a connotation of Y that wasn't in your given definition.
**The Parable of Hemlock - Socrates is a human, and humans, by definition, are mortal. So if you defined humans to not be mortal, would Socrates live forever?

*[[Availability heuristic]]
**Availability - Availability bias is a tendency to estimate the probability of an event based on whatever evidence about that event pops into your mind, without taking into account the ways in which some pieces of evidence are more memorable than others, or some pieces of evidence are easier to come by than others. This bias directly consists in considering a mismatched data set that leads to a distorted model and a biased estimate.

*[[Bayesian]]
**Qualitatively Confused - Using qualitative, binary reasoning may make it easier to confuse belief and reality; if we use probability distributions, the distinction is much clearer.
**That Alien Message - Einstein used evidence more efficiently than other physicists, but he was still extremely inefficient in an absolute sense. If a huge team of cryptographers and physicists were examining an interstellar transmission, going over it bit by bit, we could deduce principles on the order of Galilean gravity just from seeing one or two frames of a picture. As if the very first human to see an apple fall, had, on the instant, realized that its position went as the square of the time and that this implied constant acceleration.
**Changing the Definition of Science - Many of these ideas are surprisingly conventional, and being floated around by other thinkers. I'm a good deal less of a lonely iconoclast than I seem; maybe it's just the way I talk.

*[[Belief]]
**Belief in Belief - Suppose someone claims to have a dragon in their garage, but as soon as you go to look, they say, "It's an invisible dragon!" The remarkable thing is that they know in advance exactly which experimental results they shall have to excuse, indicating that some part of their mind knows what's really going on. And yet they may honestly believe they believe there's a dragon in the garage. They may perhaps believe it is virtuous to believe there is a dragon in the garage, and believe themselves virtuous. Even though they anticipate as if there is no dragon.

*[[Belief as attire]]
**Belief as Attire - When you've stopped anticipating-as-if something, but still believe it is virtuous to believe it, this does not create the true fire of the child who really does believe. On the other hand, it is very easy for people to be passionate about group identification - sports teams, political sports teams - and this may account for the passion of beliefs worn as team-identification attire.

*[[Burch's law]]
**Burch's Law - Just because your ethics require an action doesn't mean the universe will exempt you from the consequences.

*[[Chronophone]]
**Archimedes's Chronophone - Consider the thought experiment where you communicate general thinking patterns which will lead to right answers, as opposed to pre-hashed content...
**Chronophone Motivations - If you want to really benefit humanity, do some original thinking, especially about areas of application, and directions of effort.

*[[Conjunction fallacy]]
**Conjunction Fallacy - Elementary probability theory tells us that the probability of one thing (we write P(A)) is necessarily greater than or equal to the conjunction of that thing and another thing (write P(A&B)). However, in the psychology lab, subjects' judgments do not conform to this rule. This is not an isolated artifact of a particular study design. Debiasing won't be as simple as practicing specific questions; it requires certain general habits of thought.

*[[Costs of rationality]]
**Making Beliefs Pay Rent (in Anticipated Experiences) - Not every belief that we have is directly about sensory experience, but beliefs should pay rent in anticipations of experience. For example, if I believe that "Gravity is 9.8 m/s^2" then I should be able to predict where I'll see the second hand on my watch at the time I hear the crash of a bowling ball dropped off a building. On the other hand, if your postmodern English professor says that the famous writer Wulky is a "post-utopian", this may not actually mean anything. The moral is to ask "What experiences do I anticipate?" not "What statements do I believe?"
*[[Decoherence]]
**Decoherence - A quantum system that factorizes can evolve into a system that doesn't factorize, destroying the illusion of independence. But entangling a quantum system with its environment can appear to destroy entanglements that are already present. Entanglement with the environment can separate out the pieces of an amplitude distribution, preventing them from interacting with each other. Decoherence is fundamentally symmetric in time, but appears asymmetric because of the second law of thermodynamics.
**On Being Decoherent - When a sensor measures a particle whose amplitude distribution stretches over space - perhaps seeing if the particle is to the left or right of some dividing line - then the standard laws of quantum mechanics call for the sensor+particle system to evolve into a state of (particle left, sensor measures LEFT) + (particle right, sensor measures RIGHT). But when we humans look at the sensor, it only seems to say "LEFT" or "RIGHT", never a mixture like "LIGFT". This, of course, is because we ourselves are made of particles, and subject to the standard quantum laws that imply decoherence. Under standard quantum laws, the final state is (particle left, sensor measures LEFT, human sees "LEFT") + (particle right, sensor measures RIGHT, human sees "RIGHT").
**Decoherence is Pointless - There is no exact point at which decoherence suddenly happens. All of quantum mechanics is continuous and differentiable, and decoherent processes are no exception to this.
**Decoherent Essences - Decoherence is implicit within physics, not an extra law on top of it. You can choose representations that make decoherence harder to see, just like you can choose representations that make apples harder to see, but exactly the same physical process still goes on; the apple doesn't disappear and neither does decoherence. If you could make decoherence magically go away by choosing the right representation, we wouldn't need to shield quantum computers from the environment.
**Decoherence is Falsifiable and Testable - (Note: Designed to be standalone readable.) An epistle to the physicists. To probability theorists, words like "simple", "falsifiable", and "testable" have exact mathematical meanings, which are there for very strong reasons. The (minority?) faction of physicists who say that many-worlds is "not falsifiable" or that it "violates Occam's Razor" or that it is "untestable", are committing the same kind of mathematical crime as non-physicists who invent their own theories of gravity that go as inverse-cube. This is one of the reasons why I, a non-physicist, dared to talk about physics - because I saw (some!) physicists using probability theory in a way that was simply wrong. Not just criticizable, but outright mathematically wrong: 2 + 2 = 3.

*[[Egan's law]]
**Living in Many Worlds - The many worlds of quantum mechanics are not some strange, alien universe into which you have been thrust. They are where you have always lived. Egan's Law: "It all adds up to normality." Then why care about quantum physics at all? Because there's still the question of what adds up to normality, and the answer to this question turns out to be, "Quantum physics." If you're thinking of building any strange philosophies around many-worlds, you probably shouldn't - that's not what it's for.

*[[Error of crowds]]
**The Error of Crowds - Variance decomposition does not imply majoritarian-ish results; this is an artifact of minimizing ''square'' error, and drops out using square root error when bias is larger than variance; how and why to factor in evidence requires more assumptions, as per Aumann agreement.
**The Majority Is Always Wrong - Often, anything worse than the majority opinion should get selected out...so the majority opinion is often strictly superior to no others.

*[[Evidence]]
**That Alien Message - Einstein used evidence more efficiently than other physicists, but he was still extremely inefficient in an absolute sense. If a huge team of cryptographers and physicists were examining an interstellar transmission, going over it bit by bit, we could deduce principles on the order of Galilean gravity just from seeing one or two frames of a picture. As if the very first human to see an apple fall, had, on the instant, realized that its position went as the square of the time and that this implied constant acceleration.

*[[Evolution as alien god]]
**An Alien God - Evolution is awesomely powerful, unbelievably stupid, incredibly slow, monomaniacally singleminded, irrevocably splintered in focus, blindly shortsighted, and itself a completely accidental process. If evolution were a god, it would not be Jehovah, but H. P. Lovecraft's Azathoth, the blind idiot God burbling chaotically at the center of everything.

*[[Evolutionary psychology]]
**Adaptation-Executers, not Fitness-Maximizers - A central principle of evolutionary biology in general, and evolutionary psychology in particular. If we regarded human taste buds as trying to maximize fitness, we might expect that, say, humans fed a diet too high in calories and too low in micronutrients, would begin to find lettuce delicious, and cheeseburgers distasteful. But it is better to regard taste buds as an executing adaptation - they are adapted to an ancestral environment in which calories, not micronutrients, were the limiting factor.
**Thou Art Godshatter - Describes the evolutionary psychology behind the complexity of human values - how they got to be complex, and why, given that origin, there is no reason in hindsight to expect them to be simple. We certainly are not built to maximize genetic fitness.
**Detached Lever Fallacy - There is a lot of machinery hidden beneath the words, and rationalist's taboo is one way to make a step towards exposing it.

*[[Fake simplicity]]
**Fake Optimization Criteria - Why study evolution? For one thing - it lets us see an alien optimization process up close - lets us see the real consequence of optimizing strictly for an alien optimization criterion like inclusive genetic fitness. Humans, who try to persuade other humans to do things their way, think that this policy criterion ought to require predators to restrain their breeding to live in harmony with prey; the true result is something that humans find less aesthetic.
**Fake Utility Functions - Describes the seeming fascination that many have with trying to compress morality down to a single principle. The sequence leading up to this post tries to explain the cognitive twists whereby people smuggle all of their complicated other preferences into their choice of exactly which acts they try to justify using their single principle; but if they were really following only that single principle, they would choose other acts to justify.

*[[Fallacy of gray]]
**Fallacies of Compression - You have only one word, but there are two or more different things-in-reality, so that all the facts about them get dumped into a single undifferentiated mental bucket.

*[[Free-floating belief]]
**Making Beliefs Pay Rent (in Anticipated Experiences) - Not every belief that we have is directly about sensory experience, but beliefs should pay rent in anticipations of experience. For example, if I believe that "Gravity is 9.8 m/s^2" then I should be able to predict where I'll see the second hand on my watch at the time I hear the crash of a bowling ball dropped off a building. On the other hand, if your postmodern English professor says that the famous writer Wulky is a "post-utopian", this may not actually mean anything. The moral is to ask "What experiences do I anticipate?" not "What statements do I believe?"
**Belief in Belief - Suppose someone claims to have a dragon in their garage, but as soon as you go to look, they say, "It's an invisible dragon!" The remarkable thing is that they know in advance exactly which experimental results they shall have to excuse, indicating that some part of their mind knows what's really going on. And yet they may honestly believe they believe there's a dragon in the garage. They may perhaps believe it is virtuous to believe there is a dragon in the garage, and believe themselves virtuous. Even though they anticipate as if there is no dragon.
*[[Free will (solution)]]
**Timeless Causality - Using the modern, Bayesian formulation of causality, we can define causality without talking about time - define it purely in terms of relations. The river of time never flows, but it has a direction.
**Thou Art Physics - If the laws of physics control everything we do, then how can our choices be meaningful? Because you are physics. You aren't competing with physics for control of the universe, you are within physics. Anything you control is necessarily controlled by physics.
**Timeless Control - We throw away "time" but retain causality, and with it, the concepts "control" and "decide". To talk of something as having been "always determined" is mixing up a timeless and a timeful conclusion, with paradoxical results. When you take a perspective outside time, you have to be careful not to let your old, timeful intuitions run wild in the absence of their subject matter.
  • Friedman unit
    • Futuristic Predictions as Consumable Goods - The Friedman Unit is named after Thomas Friedman, who 8 times (between 2003 and 2007) called "the next six months" the critical period in Iraq. This is because future predictions are created and consumed in the now; they are used to create feelings of delicious goodness or delicious horror now, not to provide useful future advice.
  • Futility of chaos
    • The Wonder of Evolution - ...is not how amazingly well it works, but that it works at all without a mind, brain, or the ability to think abstractly - that an entirely accidental process can produce complex designs. If you talk about how amazingly well evolution works, you're missing the point.
    • Logical or Connectionist AI? - (The correct answer being "Wrong!")
  • Generalization from fictional evidence
    • The Logical Fallacy of Generalization from Fictional Evidence - The Logical Fallacy of Generalization from Fictional Evidence consists in drawing real-world conclusions based on statements invented and selected for the purpose of writing fiction. The data set is not at all representative of the real world, and in particular of whatever real-world phenomenon you need to understand to answer your real-world question. Considering this data set leads to an inadequate model, and inadequate answers.
  • Group selection
    • Evolving to Extinction - Contrary to a naive view that evolution works for the good of a species, evolution says that genes which outreproduce their alternative alleles increase in frequency within a gene pool. It is entirely possible for genes which "harm" the species to outcompete their alternatives in this way - indeed, it is entirely possible for a species to evolve to extinction.
    • The Tragedy of Group Selectionism - A tale of how some pre-1960s biologists were led astray by expecting evolution to do smart, nice things like they would do themselves.
    • Conjuring An Evolution To Serve You - If you take the hens who lay the most eggs in each generation, and breed from them, you should get hens who lay more and more eggs. Sounds logical, right? But this selection may actually favor the most dominant hen, that pecked its way to the top of the pecking order at the expense of other hens. Such breeding programs produce hens that must be housed in individual cages, or they will peck each other to death. Jeff Skilling of Enron fancied himself an evolution-conjurer - summoning the awesome power of evolution to work for him - and so, every year, every Enron employee's performance would be evaluated, and the bottom 10% would get fired, and the top performers would get huge raises and bonuses...
    • Anthropomorphic Optimism - You shouldn't bother coming up with clever, persuasive arguments for why evolution will do things the way you prefer. It really isn't listening.
  • How To Actually Change Your Mind
    • A Fable of Science and Politics - People respond in different ways to clear evidence they're wrong, not always by updating and moving on.
    • Policy Debates Should Not Appear One-Sided - Debates over outcomes with multiple effects will have arguments both for and against, so you must integrate the evidence, not expect the issue to be completely one-sided.
    • The Scales of Justice, the Notebook of Rationality - People have an irrational tendency to simplify their assessment of things into how good or bad they are without considering that the things in question may have many distinct and unrelated attributes.
    • The Affect Heuristic - Positive and negative emotional impressions exert a greater effect on many decisions than does rational analysis.
    • The Halo Effect - Positive qualities seem to correlate with each other, whether or not they actually do.
    • Your Strength as a Rationalist - A hypothesis that forbids nothing, permits everything, and thereby fails to constrain anticipation. Your strength as a rationalist is your ability to be more confused by fiction than by reality. If you are equally good at explaining any outcome, you have zero knowledge.
    • Hindsight bias - Describes the tendency for outcomes to seem much more likely in hindsight than they could have been predicted beforehand.
    • Positive Bias: Look Into the Dark - The tendency to look for evidence that confirms a hypothesis, rather than disconfirming evidence.
    • Fake Optimization Criteria - Why study evolution? For one thing - it lets us see an alien optimization process up close - lets us see the real consequence of optimizing strictly for an alien optimization criterion like inclusive genetic fitness. Humans, who try to persuade other humans to do things their way, think that this policy criterion ought to require predators to restrain their breeding to live in harmony with prey; the true result is something that humans find less aesthetic.
    • Belief in Belief - Suppose someone claims to have a dragon in their garage, but as soon as you go to look, they say, "It's an invisible dragon!" The remarkable thing is that they know in advance exactly which experimental results they shall have to excuse, indicating that some part of their mind knows what's really going on. And yet they may honestly believe they believe there's a dragon in the garage. They may perhaps believe it is virtuous to believe there is a dragon in the garage, and believe themselves virtuous. Even though they anticipate as if there is no dragon.
    • Belief in Self-Deception - Deceiving yourself is harder than it seems. What looks like a successfully adopted false belief may actually be just a belief in false belief.
    • The Third Alternative - On not skipping the step of looking for additional alternatives.
    • Feeling Rational - Emotions cannot be true or false, but they can follow from true or false beliefs.
  • Hypocrisy
    • Self-deception: Hypocrisy or Akrasia? - It is suggested that in some cases, people who say one thing and do another thing are not in fact "hypocrites". Instead they are suffering from "akrasia" or weakness of will. At the end, the problem of deciding what parts of a person's mind are considered their "real self" is discussed.
  • Improper belief
    • Belief as Attire - When you've stopped anticipating-as-if something, but still believe it is virtuous to believe it, this does not create the true fire of the child who really does believe. On the other hand, it is very easy for people to be passionate about group identification - sports teams, political sports teams - and this may account for the passion of beliefs worn as team-identification attire.
  • Magic
    • Universal Fire - You can't change just one thing in the world and expect the rest to continue working as before.

  • No safe defense
    • No Safe Defense, Not Even Science - Why am I trying to break your trust in Science? Because you can't think and trust at the same time. The social rules of Science are verbal rather than quantitative; it is possible to believe you are following them. With Bayesianism, it is never possible to do an exact calculation and get the exact rational answer that you know exists. You are visibly less than perfect, and so you will not be tempted to trust yourself.
  • Outside view
    • Planning Fallacy - We tend to plan envisioning that everything will go as expected. Even assuming that such an estimate is accurate conditional on everything going as expected, things will not go as expected. As a result, we routinely see outcomes worse than the ex ante worst case scenario.
  • Philosophical zombie
    • Zombies! Zombies? - Don't try to put your consciousness or your personal identity outside physics. Whatever makes you say "I think therefore I am", causes your lips to move; it is within the chains of cause and effect that produce our observed universe.
  • Planning fallacy
    • Planning Fallacy - We tend to plan envisioning that everything will go as expected. Even assuming that such an estimate is accurate conditional on everything going as expected, things will not go as expected. As a result, we routinely see outcomes worse than the ex ante worst case scenario.
  • Politics is the Mind-Killer
    • A Fable of Science and Politics - People respond in different ways to clear evidence they're wrong, not always by updating and moving on.
    • Policy Debates Should Not Appear One-Sided - Debates over outcomes with multiple effects will have arguments both for and against, so you must integrate the evidence, not expect the issue to be completely one-sided.
    • The Scales of Justice, the Notebook of Rationality - People have an irrational tendency to simplify their assessment of things into how good or bad they are without considering that the things in question may have many distinct and unrelated attributes.
    • Correspondence Bias - Also known as the fundamental attribution error, refers to the tendency to attribute the behavior of others to intrinsic dispositions, while excusing one's own behavior as the result of circumstance.
  • Possibility
    • Useless Medical Disclaimers - Medical disclaimers without probabilities are hard to use, and if probabilities aren't there because some people can't handle having them there, maybe we ought to tax those people.
  • Prediction
    • Futuristic Predictions as Consumable Goods - The Friedman Unit is named after Thomas Friedman, who 8 times (between 2003 and 2007) called "the next six months" the critical period in Iraq. This is because future predictions are created and consumed in the now; they are used to create feelings of delicious goodness or delicious horror now, not to provide useful future advice.
  • Reality is normal
    • Think Like Reality - "Quantum physics is not 'weird'. You are weird. You have the absolutely bizarre idea that reality ought to consist of little billiard balls bopping around, when in fact reality is a perfectly normal cloud of complex amplitude in configuration space. This is your problem, not reality's, and you are the one who needs to change."
  • Representativeness heuristic
    • Conjunction Fallacy - Elementary probability theory tells us that the probability of one thing (we write P(A)) is necessarily greater than or equal to the conjunction of that thing and another thing (write P(A&B)). However, in the psychology lab, subjects' judgments do not conform to this rule. This is not an isolated artifact of a particular study design. Debiasing won't be as simple as practicing specific questions, it requires certain general habits of thought.
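The probability rule in the Conjunction Fallacy summary above can be checked directly. A minimal sketch, assuming made-up events and probabilities purely for illustration (none of this is from the original post):

```python
import random

random.seed(0)

# Hypothetical traits assigned independently at random, e.g.
# A = "is a bank teller", B = "is active in the feminist movement".
# The probabilities 0.1 and 0.5 are arbitrary illustrations.
people = [(random.random() < 0.1, random.random() < 0.5)
          for _ in range(100_000)]

p_a = sum(1 for a, b in people if a) / len(people)
p_a_and_b = sum(1 for a, b in people if a and b) / len(people)

# Elementary probability: the conjunction A&B can never be more
# probable than A alone, whatever the underlying numbers are.
assert p_a_and_b <= p_a
```

Whatever events and frequencies you plug in, the inequality holds; the conjunction fallacy is the lab subjects' tendency to judge P(A&B) greater than P(A) anyway.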
  • Science
    • Science Doesn't Trust Your Rationality - The reason Science doesn't always agree with the exact, Bayesian, rational answer, is that Science doesn't trust you to be rational. It wants you to go out and gather overwhelming experimental evidence.
    • No Safe Defense, Not Even Science - Why am I trying to break your trust in Science? Because you can't think and trust at the same time. The social rules of Science are verbal rather than quantitative; it is possible to believe you are following them. With Bayesianism, it is never possible to do an exact calculation and get the exact rational answer that you know exists. You are visibly less than perfect, and so you will not be tempted to trust yourself.
  • Seeing with Fresh Eyes
    • Priming and Contamination - Even slight exposure to a stimulus is enough to change the outcome of a decision or estimate. See also Never Leave Your Room by Yvain, and Cached Selves by Salamon and Rayhawk.
    • Do We Believe Everything We're Told? - Some experiments on priming suggest that mere exposure to a view is enough to get one to passively accept it, at least until it is specifically rejected.
    • Original Seeing - One way to fight cached patterns of thought is to focus on precise concepts.
    • How to Seem (and Be) Deep - Just find ways of violating cached expectations.
    • We Change Our Minds Less Often Than We Think - We all change our minds occasionally, but we don't constantly, honestly reevaluate every decision and course of action. Once you think you believe something, the chances are good that you already do, for better or worse.
    • Hold Off On Proposing Solutions - Proposing solutions prematurely is dangerous, because it introduces weak conclusions into the pool of facts you are considering. The resulting data set becomes tilted towards premature conclusions that are likely to be wrong, and less representative of the phenomenon you are trying to model than the facts you started from.
    • Asch's Conformity Experiment - The unanimous agreement of surrounding others can make subjects disbelieve (or at least, fail to report) what's right before their eyes. The addition of just one dissenter is enough to dramatically reduce the rates of improper conformity.
    • On Expressing Your Concerns - A way of breaking the conformity effect in some cases.
  • Self-deception
    • Self-deception: Hypocrisy or Akrasia? - It is suggested that in some cases, people who say one thing and do another thing are not in fact "hypocrites". Instead they are suffering from "akrasia" or weakness of will. At the end, the problem of deciding what parts of a person's mind are considered their "real self" is discussed.
  • Separate magisteria
    • Outside the Laboratory - Outside the laboratory: those who understand the map/territory distinction will *integrate* their knowledge, as they see the evidence that reality is a single unified process.
  • Standard of evidence
    • The Dilemma: Science or Bayes? - The failure of first-half-of-20th-century-physics was not due to straying from the scientific method. Science and rationality - that is, Science and Bayesianism - aren't the same thing, and sometimes they give different answers.
    • Science Doesn't Trust Your Rationality - The reason Science doesn't always agree with the exact, Bayesian, rational answer, is that Science doesn't trust you to be rational. It wants you to go out and gather overwhelming experimental evidence.
  • Statistical bias
    • "Statistical Bias" - There are two types of error: systematic error and random variance error; by repeating experiments you can average out and drive down the variance error.
    • Useful Statistical Biases - If you know variance (noise) exists, you can intentionally introduce bias by ignoring some squiggles and choosing a simpler hypothesis, thereby lowering expected variance while raising expected bias; sometimes total error is lower, hence the "bias-variance tradeoff" technique.
  • Stupidity of evolution
    • The Wonder of Evolution - ...is not how amazingly well it works, but that it works at all without a mind, brain, or the ability to think abstractly - that an entirely accidental process can produce complex designs. If you talk about how amazingly well evolution works, you're missing the point.
    • Evolutions Are Stupid (But Work Anyway) - Evolution, while not simple, is sufficiently simpler than organic brains that we can describe mathematically how slow and stupid it is.
  • Superstimulus
    • Superstimuli and the Collapse of Western Civilization - As a side effect of evolution, super-stimuli exist, and as a result of economics, are getting and should continue to get worse.
    • Adaptation-Executers, not Fitness-Maximizers - A central principle of evolutionary biology in general, and evolutionary psychology in particular. If we regarded human taste buds as trying to maximize fitness, we might expect that, say, humans fed a diet too high in calories and too low in micronutrients, would begin to find lettuce delicious, and cheeseburgers distasteful. But it is better to regard taste buds as an executing adaptation - they are adapted to an ancestral environment in which calories, not micronutrients, were the limiting factor.
  • Surprise
    • Think Like Reality - "Quantum physics is not 'weird'. You are weird. You have the absolutely bizarre idea that reality ought to consist of little billiard balls bopping around, when in fact reality is a perfectly normal cloud of complex amplitude in configuration space. This is your problem, not reality's, and you are the one who needs to change."
    • Your Strength as a Rationalist - A hypothesis that forbids nothing, permits everything, and thereby fails to constrain anticipation. Your strength as a rationalist is your ability to be more confused by fiction than by reality. If you are equally good at explaining any outcome, you have zero knowledge.
  • Technical explanation
    • Making Beliefs Pay Rent (in Anticipated Experiences) - Not every belief that we have is directly about sensory experience, but beliefs should pay rent in anticipations of experience. For example, if I believe that "Gravity is 9.8 m/s^2" then I should be able to predict where I'll see the second hand on my watch at the time I hear the crash of a bowling ball dropped off a building. On the other hand, if your postmodern English professor says that the famous writer Wulky is a "post-utopian", this may not actually mean anything. The moral is to ask "What experiences do I anticipate?" not "What statements do I believe?"
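The gravity belief in the summary above really does pay rent in anticipated experience: from g = 9.8 m/s^2 you can predict when you'll hear the crash. A quick sketch of that prediction (the 45 m building height is a made-up illustration, and air resistance is ignored):

```python
import math

G = 9.8  # m/s^2, the gravitational acceleration quoted in the summary

def fall_time(height_m: float) -> float:
    """Time for a bowling ball dropped from rest to fall height_m meters,
    ignoring air resistance: h = (1/2) * g * t^2, so t = sqrt(2*h/g)."""
    return math.sqrt(2 * height_m / G)

# Dropped from a 45 m building, expect the crash about 3 seconds later.
t = fall_time(45.0)  # ≈ 3.03 s
```

The belief constrains anticipation: if the crash arrived at ten seconds instead, the belief would be refuted.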
  • The Quantum Physics Sequence
    • Reductionism - We build models of the universe that have many different levels of description. But so far as anyone has been able to determine, the universe itself has only the single level of fundamental physics - reality doesn't explicitly compute protons, only quarks.
    • Zombies! Zombies? - Don't try to put your consciousness or your personal identity outside physics. Whatever makes you say "I think therefore I am", causes your lips to move; it is within the chains of cause and effect that produce our observed universe.
    • Belief in the Implied Invisible - That it's impossible even in principle to observe something sometimes isn't enough to conclude that it doesn't exist.
    • Quantum Explanations - Quantum mechanics doesn't deserve its fearsome reputation.
    • Configurations and Amplitude - A preliminary glimpse at the stuff reality is made of. The classic split-photon experiment with half-silvered mirrors. Alternative pathways the photon can take, can cancel each other out. The mysterious measuring tool that tells us the relative squared moduli.
    • Joint Configurations - The laws of physics are inherently over mathematical entities, configurations, that involve multiple particles. A basic, ontologically existent entity, according to our current understanding of quantum mechanics, does not look like a photon - it looks like a configuration of the universe with "A photon here, a photon there." Amplitude flows between these configurations can cancel or add; this gives us a way to detect which configurations are distinct. It is an experimentally testable fact that "Photon 1 here, photon 2 there" is the same configuration as "Photon 2 here, photon 1 there".
    • Distinct Configurations - Since configurations are over the combined state of all the elements in a system, adding a sensor that detects whether a particle went one way or the other, becomes a new element of the system that can make configurations "distinct" instead of "identical". This confused the living daylights out of early quantum experimenters, because it meant that things behaved differently when they tried to "measure" them. But it's not only measuring instruments that do the trick - any sensitive physical element will do - and the distinctness of configurations is a physical fact, not a fact about our knowledge. There is no need to suppose that the universe cares what we think.
    • Where Philosophy Meets Science - In retrospect, supposing that quantum physics had anything to do with consciousness was a big mistake. Could philosophers have told the physicists so? But we don't usually see philosophers sponsoring major advances in physics; why not?
    • Can You Prove Two Particles Are Identical? - You wouldn't think that it would be possible to do an experiment that told you that two particles are completely identical - not just to the limit of experimental precision, but perfectly. You could even give a precise-sounding philosophical argument for why it was not possible - but the argument would have a deeply buried assumption. Quantum physics violates this deep assumption, making the experiment easy.
    • Classical Configuration Spaces - How to visualize the state of a system of two 1-dimensional particles, as a single point in 2-dimensional space. A preliminary step before moving into...
    • The Quantum Arena - Instead of a system state being associated with a single point in a classical configuration space, the instantaneous real state of a quantum system is a complex amplitude distribution over a quantum configuration space. What creates the illusion of "individual particles", like an electron caught in a trap, is a plaid distribution - one that happens to factor into the product of two parts. It is the whole distribution that evolves when a quantum system evolves. Individual configurations don't have physics; amplitude distributions have physics. Quantum entanglement is the general case; quantum independence is the special case.
    • Feynman Paths - Instead of thinking that a photon takes a single straight path through space, we can regard it as taking all possible paths through space, and adding the amplitudes for every possible path. Nearly all the paths cancel out - unless we do clever quantum things, so that some paths add instead of canceling out. Then we can make light do funny tricks for us, like reflecting off a mirror in such a way that the angle of incidence doesn't equal the angle of reflection. But ordinarily, nearly all the paths except an extremely narrow band, cancel out - this is one of the keys to recovering the hallucination of classical physics.
    • No Individual Particles - One of the chief ways to confuse yourself while thinking about quantum mechanics, is to think as if photons were little billiard balls bouncing around. The appearance of little billiard balls is a special case of a deeper level on which there are only multiparticle configurations and amplitude flows. It is easy to set up physical situations in which there exists no fact of the matter as to which electron was originally which.
    • Decoherence - A quantum system that factorizes can evolve into a system that doesn't factorize, destroying the illusion of independence. But entangling a quantum system with its environment, can appear to destroy entanglements that are already present. Entanglement with the environment can separate out the pieces of an amplitude distribution, preventing them from interacting with each other. Decoherence is fundamentally symmetric in time, but appears asymmetric because of the second law of thermodynamics.
    • The So-Called Heisenberg Uncertainty Principle - Unlike classical physics, in quantum physics it is not possible to separate out a particle's "position" from its "momentum".
    • Which Basis Is More Fundamental? - The position basis can be computed locally in the configuration space; the momentum basis is not local. Why care about locality? Because it is a very deep principle; reality itself seems to favor it in some way.
    • Where Physics Meets Experience - Meet the Ebborians, who reproduce by fission. The Ebborian brain is like a thick sheet of paper that splits down its thickness. They frequently experience dividing into two minds, and can talk to their other selves. It seems that their unified theory of physics is almost finished, and can answer every question, when one Ebborian asks: When exactly does one Ebborian become two people?
    • Where Experience Confuses Physicists - It then turns out that the entire planet of Ebbore is splitting along a fourth-dimensional thickness, duplicating all the people within it. But why does the apparent chance of "ending up" in one of those worlds, equal the square of the fourth-dimensional thickness? Many mysterious answers are proposed to this question, and one non-mysterious one.
    • On Being Decoherent - When a sensor measures a particle whose amplitude distribution stretches over space - perhaps seeing if the particle is to the left or right of some dividing line - then the standard laws of quantum mechanics call for the sensor+particle system to evolve into a state of (particle left, sensor measures LEFT) + (particle right, sensor measures RIGHT). But when we humans look at the sensor, it only seems to say "LEFT" or "RIGHT", never a mixture like "LIGFT". This, of course, is because we ourselves are made of particles, and subject to the standard quantum laws that imply decoherence. Under standard quantum laws, the final state is (particle left, sensor measures LEFT, human sees "LEFT") + (particle right, sensor measures RIGHT, human sees "RIGHT").
    • The Conscious Sorites Paradox - Decoherence is implicit in quantum physics, not an extra law on top of it. Asking exactly when "one world" splits into "two worlds" may be like asking when, if you keep removing grains of sand from a pile, it stops being a "heap". Even if you're inside the world, there may not be a definite answer. This puzzle does not arise only in quantum physics; the Ebborians could face it in a classical universe, or we could build sentient flat computers that split down their thickness. Is this really a physicist's problem?
    • Decoherence is Pointless - There is no exact point at which decoherence suddenly happens. All of quantum mechanics is continuous and differentiable, and decoherent processes are no exception to this.
    • Decoherent Essences - Decoherence is implicit within physics, not an extra law on top of it. You can choose representations that make decoherence harder to see, just like you can choose representations that make apples harder to see, but exactly the same physical process still goes on; the apple doesn't disappear and neither does decoherence. If you could make decoherence magically go away by choosing the right representation, we wouldn't need to shield quantum computers from the environment.
    • The Born Probabilities - The last serious mysterious question left in quantum physics: When a quantum world splits in two, why do we seem to have a greater probability of ending up in the larger blob, exactly proportional to the integral of the squared modulus? It's an open problem, but non-mysterious answers have been proposed. Try not to go funny in the head about it.
    • Decoherence as Projection - Since quantum evolution is linear and unitary, decoherence can be seen as projecting a wavefunction onto orthogonal subspaces. This can be neatly illustrated using polarized photons and the angle of the polarized sheet that will absorb or transmit them.
    • Entangled Photons - Using our newly acquired understanding of photon polarizations, we see how to construct a quantum state of two photons in which, when you measure one of them, the person in the same world as you, will always find that the opposite photon has opposite quantum state. This is not because any influence is transmitted; it is just decoherence that takes place in a very symmetrical way, as can readily be observed in our calculations.
    • Bell's Theorem: No EPR "Reality" - (Note: This post was designed to be read as a stand-alone, if desired.) Originally, the discoverers of quantum physics thought they had discovered an incomplete description of reality - that there was some deeper physical process they were missing, and this was why they couldn't predict exactly the results of quantum experiments. The math of Bell's Theorem is surprisingly simple, and we walk through it. Bell's Theorem rules out being able to locally predict a single, unique outcome of measurements - ruling out a way that Einstein, Podolsky, and Rosen once defined "reality". This shows how deep implicit philosophical assumptions can go. If worlds can split, so that there is no single unique outcome, then Bell's Theorem is no problem. Bell's Theorem does, however, rule out the idea that quantum physics describes our partial knowledge of a deeper physical state that could locally produce single outcomes - any such description will be inconsistent.
    • Spooky Action at a Distance: The No-Communication Theorem - As Einstein argued long ago, the quantum physics of his era - that is, the single-global-world interpretation of quantum physics, in which experiments have single unique random results - violates Special Relativity; it imposes a preferred space of simultaneity and requires a mysterious influence to be transmitted faster than light; which mysterious influence can never be used to transmit any useful information. Getting rid of the single global world dispels this mystery and puts everything back to normal again.
    • Quantum Non-Realism - "Shut up and calculate" is the best approach you can take when none of your theories are very good. But that is not the same as claiming that "Shut up!" actually is a theory of physics. Saying "I don't know what these equations mean, but they seem to work" is a very different matter from saying: "These equations definitely don't mean anything, they just work!"
    • Collapse Postulates - Early physicists simply didn't think of the possibility of more than one world - it just didn't occur to them, even though it's the straightforward result of applying the quantum laws at all levels. So they accidentally invented a completely and strictly unnecessary part of quantum theory to ensure there was only one world - a law of physics that says that parts of the wavefunction mysteriously and spontaneously disappear when decoherence prevents us from seeing them any more. If such a law really existed, it would be the only non-linear, non-unitary, non-differentiable, non-local, non-CPT-symmetric, acausal, faster-than-light phenomenon in all of physics.
    • If Many-Worlds Had Come First - If early physicists had never made the mistake, and thought immediately to apply the quantum laws at all levels to produce macroscopic decoherence, then "collapse postulates" would today seem like a completely crackpot theory. In addition to their other problems, like FTL, the collapse postulate would be the only physical law that was informally specified - often in dualistic (mentalistic) terms - because it was the only fundamental law adopted without precise evidence to nail it down. Here, we get a glimpse at that alternate Earth.
    • Many Worlds, One Best Guess - Summarizes the arguments that nail down macroscopic decoherence, aka the "many-worlds interpretation". Concludes that many-worlds wins outright given the current state of evidence. The argument should have been over fifty years ago. New physical evidence could reopen it, but we have no particular reason to expect this.
    • Living in Many Worlds - The many worlds of quantum mechanics are not some strange, alien universe into which you have been thrust. They are where you have always lived. Egan's Law: "It all adds up to normality." Then why care about quantum physics at all? Because there's still the question of what adds up to normality, and the answer to this question turns out to be, "Quantum physics." If you're thinking of building any strange philosophies around many-worlds, you probably shouldn't - that's not what it's for.
    • Mach's Principle: Anti-Epiphenomenal Physics - Could you tell if the whole universe were shifted an inch to the left? Could you tell if the whole universe was traveling left at ten miles per hour? Could you tell if the whole universe was accelerating left at ten miles per hour? Could you tell if the whole universe was rotating?
    • Relative Configuration Space - Maybe the reason why we can't observe absolute speeds, absolute positions, absolute accelerations, or absolute rotations, is that particles don't have absolute positions - only positions relative to each other. That is, maybe quantum physics takes place in a relative configuration space.
    • Timeless Physics - What time is it? How do you know? The question "What time is it right now?" may make about as much sense as asking "Where is the universe?" Not only that, our physics equations may not need a t in them!
    • Timeless Beauty - To get rid of time you must reduce it to nontime. In timeless physics, everything that exists is either perfectly global or perfectly local. The laws of physics are perfectly global; the configuration space is perfectly local. Every fundamentally existent ontological entity has a unique identity and a unique value. This beauty makes ugly theories much more visibly ugly; a collapse postulate becomes a visible scar on the perfection.
    • Timeless Causality - Using the modern, Bayesian formulation of causality, we can define causality without talking about time - define it purely in terms of relations. The river of time never flows, but it has a direction.
    • Timeless Identity - How can you be the same person tomorrow as today, in the river that never flows, when not a drop of water is shared between one time and another? Having used physics to completely trash all naive theories of identity, we reassemble a conception of persons and experiences from what is left. With a surprising practical application...
    • Thou Art Physics - If the laws of physics control everything we do, then how can our choices be meaningful? Because you are physics. You aren't competing with physics for control of the universe, you are within physics. Anything you control is necessarily controlled by physics.
    • Timeless Control - We throw away "time" but retain causality, and with it, the concepts "control" and "decide". To talk of something as having been "always determined" is mixing up a timeless and a timeful conclusion, with paradoxical results. When you take a perspective outside time, you have to be careful not to let your old, timeful intuitions run wild in the absence of their subject matter.

    • The Failures of Eld Science - A short story set in the same world as Initiation Ceremony. Future physics students look back on the cautionary tale of quantum physics.

    • The Dilemma: Science or Bayes? - The failure of first-half-of-20th-century-physics was not due to straying from the scientific method. Science and rationality - that is, Science and Bayesianism - aren't the same thing, and sometimes they give different answers.
    • Science Doesn't Trust Your Rationality - The reason Science doesn't always agree with the exact, Bayesian, rational answer, is that Science doesn't trust you to be rational. It wants you to go out and gather overwhelming experimental evidence.
    • When Science Can't Help - If you have an idea, Science tells you to test it experimentally. If you spend 10 years testing the idea and the result comes out negative, Science slaps you on the back and says, "Better luck next time." If you want to spend 10 years testing a hypothesis that will actually turn out to be right, you'll have to try to do the thing that Science doesn't trust you to do: think rationally, and figure out the answer before you get clubbed over the head with it.
    • Science Isn't Strict Enough - Science lets you believe any damn stupid idea that hasn't been refuted by experiment. Bayesianism says there is always an exactly rational degree of belief given your current evidence, and this does not shift a nanometer to the left or to the right depending on your whims. Science is a social freedom - we let people test whatever hypotheses they like, because we don't trust the village elders to decide in advance - but you shouldn't confuse that with an individual standard of rationality.
    • Do Scientists Already Know This Stuff? - No. Maybe someday it will be part of standard scientific training, but for now, it's not, and the absence is visible.
    • No Safe Defense, Not Even Science - Why am I trying to break your trust in Science? Because you can't think and trust at the same time. The social rules of Science are verbal rather than quantitative; it is possible to believe you are following them. With Bayesianism, it is never possible to do an exact calculation and get the exact rational answer that you know exists. You are visibly less than perfect, and so you will not be tempted to trust yourself.
    • Changing the Definition of Science - Many of these ideas are surprisingly conventional, and being floated around by other thinkers. I'm a good deal less of a lonely iconoclast than I seem; maybe it's just the way I talk.
    • Faster Than Science - Is it really possible to arrive at the truth faster than Science does? Not only is it possible, but the social process of science relies on scientists doing so - when they choose which hypotheses to test. In many answer spaces it's not possible to find the true hypothesis by accident. Science leaves it up to experiment to socially declare who was right, but if there weren't some people who could get it right in the absence of overwhelming experimental proof, science would be stuck.
    • Einstein's Speed - Albert was unusually good at finding the right theory in the presence of only a small amount of experimental evidence. Even more unusually, he admitted it - he claimed to know the theory was right, even in advance of the public proof. It's possible to arrive at the truth by thinking great high-minded thoughts of the sort that Science does not trust you to think, but it's a lot harder than arriving at the truth in the presence of overwhelming evidence.
    • That Alien Message - Einstein used evidence more efficiently than other physicists, but he was still extremely inefficient in an absolute sense. If a huge team of cryptographers and physicists were examining an interstellar transmission, going over it bit by bit, we could deduce principles on the order of Galilean gravity just from seeing one or two frames of a picture. As if the very first human to see an apple fall, had, on the instant, realized that its position went as the square of the time and that this implied constant acceleration.
    • My Childhood Role Model - I looked up to the ideal of a Bayesian superintelligence, not Einstein.
    • Einstein's Superpowers - There's an unfortunate tendency to talk as if Einstein had superpowers - as if, even before Einstein was famous, he had an inherent disposition to be Einstein - a potential as rare as his fame and as magical as his deeds. Yet the way you acquire superpowers is not by being born with them, but by seeing, with a sudden shock, that they are perfectly normal.
    • Class Project - From the world of Initiation Ceremony. Brennan and the others are faced with their midterm exams.
    • Why Quantum? - Why do a series on quantum mechanics? Some of the many morals that are best illustrated by the tale of quantum mechanics and its misinterpretation.

  • Zombies (sequence)
    • Zombies! Zombies? - Don't try to put your consciousness or your personal identity outside physics. Whatever makes you say "I think therefore I am", causes your lips to move; it is within the chains of cause and effect that produce our observed universe.
    • Belief in the Implied Invisible - That it's impossible even in principle to observe something sometimes isn't enough to conclude that it doesn't exist.


The following concepts don't have wikipages with links to LessWrong.com articles yet:


The following concepts are not in the All Articles pages:


The following concepts are in the All Articles page, but are redirects:


The following articles in the All Articles index are missing an entry:

The following article links need to be added to the concept pages:


The following See Also links only go one way:

The following is a list of all concept pages:


Links to the All Articles pages:


Links to the Summaries pages: