Absurdity heuristic

From Lesswrongwiki

{{stub}}
The '''absurdity heuristic''' classifies highly untypical situations as "absurd", or [[antiprediction|impossible]]. While normally very useful as a form of [[epistemic hygiene]], allowing us to detect nonsense, it suffers from the same problems as the [[representativeness heuristic]].
The absurdity heuristic may be described as the converse of the representativeness heuristic: the less X resembles Y, or the more X violates typicality assumptions of Y, the less probable it is that X is the product, explanation, or outcome of Y. A sequence of events is less probable when it involves an egg unscrambling itself, water flowing upward, machines thinking, or dead people coming back to life. People may also be more sensitive to "absurdity" that invalidates a plan or indicates cheating. Consider the difference between "I saw a little blue man yesterday, walking down the street" versus "I'm going to jump off this cliff and a little blue man will catch me on the way down" or "If you give me your wallet, a little blue man will bring you a pot of gold."

There are a number of situations in which the absurdity heuristic is wrong. A deep theory has to [[shut up and multiply|override the intuitive expectation]]. Where you don't expect intuition to construct an [[technical explanation|adequate model]] of reality, classifying an idea as impossible may be [[overconfidence|overconfident]]. [http://lesswrong.com/lw/j1/stranger_than_history/ The future is usually "absurd"], although sometimes it's possible to [[exploratory engineering|rigorously infer lower bounds on capabilities of the future]], proving possible what is intuitively absurd.
==Blog posts==

*[http://lesswrong.com/lw/j4/absurdity_heuristic_absurdity_bias/ Absurdity Heuristic, Absurdity Bias] by [[Eliezer Yudkowsky]]
*[http://lesswrong.com/lw/j1/stranger_than_history/ Stranger Than History]
*[http://lesswrong.com/lw/j6/why_is_the_future_so_absurd/ Why is the Future So Absurd?]
*[http://www.overcomingbias.com/2008/04/arbitrary-silli.html Arbitrary Silliness] by [[Robin Hanson]]
*[http://lesswrong.com/lw/2d/talking_snakes_a_cautionary_tale/ Talking Snakes: A Cautionary Tale] by [[Yvain]]
*[http://lesswrong.com/lw/1kh/the_correct_contrarian_cluster/ The Correct Contrarian Cluster]
 
==See also==

*[[Representativeness heuristic]]
*[[Generalization from fictional evidence]]
*[[Shut up and multiply]]
*[[Antiprediction]]
*[[Epistemic hygiene]]
*[[Exploratory engineering]]
*[[Illusion of transparency]]
*[[Status quo bias]], [[Reversal test]]

[[Category:Biases]]
[[Category:Future]]

Latest revision as of 04:44, 7 July 2016
