The '''absurdity heuristic''' classifies highly untypical situations as "absurd", or [[antiprediction|impossible]]. While normally very useful, allowing one to quickly detect nonsense, it suffers from the same problems as the [[representativeness heuristic]].

The absurdity heuristic may be described as the converse of the representativeness heuristic: the less X resembles Y, or the more X violates the typicality assumptions of Y, the less probable it seems that X is the product, explanation, or outcome of Y. A sequence of events seems less probable when it involves an egg unscrambling itself, water flowing upward, machines thinking, or dead people coming back to life. People may also be more sensitive to "absurdity" that invalidates a plan or indicates cheating. Consider the difference between "I saw a little blue man yesterday, walking down the street" and "I'm going to jump off this cliff and a little blue man will catch me on the way down" or "If you give me your wallet, a little blue man will bring you a pot of gold."
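
A rough way to see the structure described above is to model the heuristic as discounting a scenario's judged probability by how typical the scenario feels. The short Python sketch below is purely illustrative: the judged_probability function and its numbers are invented for this example, not drawn from the posts this article summarizes.

<pre>
# Illustrative toy model of the absurdity heuristic (invented for this
# article, not a standard formalization): the judged probability of a
# scenario is its real base rate, discounted by how typical it feels.

def judged_probability(base_rate, typicality):
    """base_rate:  actual prior probability of the scenario (0..1)
    typicality: how well the scenario fits intuitive expectations
                (1.0 = completely typical, 0.0 = maximally "absurd")
    """
    # Scaling by typicality rounds highly atypical scenarios down
    # toward "impossible" regardless of their actual base rate.
    return base_rate * typicality

# An egg unscrambling itself: genuinely near-impossible, and the
# heuristic correctly dismisses it.
print(judged_probability(1e-30, 0.0))   # 0.0

# "Machines thinking": feels absurd, so the heuristic rounds it down
# even when the real probability is substantial -- the failure mode
# discussed below.
print(judged_probability(0.5, 0.01))    # 0.005
</pre>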
 
There are a number of situations in which the absurdity heuristic is wrong. A deep theory has to [[shut up and multiply|override the intuitive expectation]]. Where you don't expect intuition to construct an [[technical explanation|adequate model]] of reality, classifying an idea as impossible may be [[overconfidence|overconfident]]. [http://lesswrong.com/lw/j1/stranger_than_history/ The future is usually "absurd"], although it is sometimes possible to [[exploratory engineering|rigorously infer lower bounds on the capabilities of the future]], proving possible what is intuitively absurd.
 
==See also==
 
*[[Representativeness heuristic]]
*[[Generalization from fictional evidence]]
*[[Antiprediction]]
*[[Exploratory engineering]]
 
==Main post==
 
*[http://lesswrong.com/lw/j6/why_is_the_future_so_absurd/ Why is the Future So Absurd?] by [[Eliezer Yudkowsky]]
{{stub}}
 
[[Category:Biases]]
