Aumann's agreement theorem

{{wikilink}}
 
'''Aumann's agreement theorem''' states that Bayesian reasoners with [[common priors]] and [[common knowledge]] of each other's opinions ''cannot'' agree to disagree. Intuitively: if ''I'm'' an honest seeker of truth, and ''you're'' an honest seeker of truth, and we ''believe'' each other to be honest, then we can update on each other's opinions and quickly reach agreement. Unless you think I'm so irredeemably irrational that my opinions ''anti''correlate with truth, the very fact that I believe something is Bayesian evidence that it is true, and you should take that into account when forming your own belief. Likewise, fellow rationalists should update their beliefs on your beliefs, ''not'' as a social custom or personal courtesy, but simply because your rational belief really ''is'' Bayesian evidence about the state of the world, in the same way that a photograph or a reference book is. The fact that disagreements on questions of simple fact are so common amongst humans, and that people seem to think this is normal, should [[No safe defense|strike fear into the heart]] of every aspiring rationalist.
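
To make the point that beliefs are themselves evidence more concrete, here is a minimal worked example of the Bayes-rule arithmetic in Python. The numbers, and the likelihoods attached to a friend's assertion, are illustrative assumptions for this sketch rather than anything supplied by the theorem:

<pre>
# Toy Bayes-rule calculation: treating a friend's stated belief as evidence.
# Every number here is an illustrative assumption, not part of the theorem.

prior = 0.5                 # my prior probability that the claim is true
p_assert_if_true = 0.8      # chance the friend would assert it if it is true
p_assert_if_false = 0.3     # chance the friend would assert it if it is false

# Bayes' rule: P(true | asserted) = P(asserted | true) * P(true) / P(asserted)
p_asserted = p_assert_if_true * prior + p_assert_if_false * (1 - prior)
posterior = p_assert_if_true * prior / p_asserted

print(round(posterior, 3))  # 0.727: the bare assertion should move my estimate
</pre>

Under these made-up numbers, hearing the assertion alone moves the probability from 0.5 to roughly 0.73, before any object-level argument has been exchanged.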
 
Stated more precisely, the theorem says that if two people are genuine [[Bayesian]]s (agents acting rationally in a certain precise sense), share common [[priors]], and have common knowledge of each other's current probability assignments, then they must have equal probability assignments.
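
The theorem is a statement about posteriors that are already common knowledge, not a procedure for reaching them, but its flavour can be seen in the back-and-forth dialogue studied by Geanakoplos and Polemarchakis: two Bayesians with a uniform common prior over a few states, each observing a different information partition, take turns announcing their current posterior for an event, and every announcement becomes common knowledge. The following Python sketch uses an invented example (the states, the event, and the partitions are assumptions made for illustration, not taken from Aumann's paper):

<pre>
from fractions import Fraction

# Illustrative setup: nine equally likely states (a uniform common prior),
# an event E, and one information partition per agent. All of these
# particulars are invented for this sketch.
STATES = frozenset(range(1, 10))
EVENT = frozenset({3, 4})
PARTITION_A = [frozenset({1, 2, 3}), frozenset({4, 5, 6}), frozenset({7, 8, 9})]
PARTITION_B = [frozenset({1, 2, 3, 4}), frozenset({5, 6, 7, 8}), frozenset({9})]

def cell(partition, state):
    """Return the partition cell containing `state`."""
    return next(c for c in partition if state in c)

def posterior(info):
    """P(EVENT | info) under the uniform common prior."""
    return Fraction(len(EVENT & info), len(info))

def dialogue(true_state, max_rounds=10):
    """Agents alternate announcing posteriors; each announcement becomes
    common knowledge, so both agents condition on it."""
    public = STATES          # states consistent with all announcements so far
    partitions = [PARTITION_A, PARTITION_B]
    last = [None, None]
    for r in range(max_rounds):
        speaker = r % 2
        part = partitions[speaker]
        # What the speaker announces at the true state:
        q = posterior(cell(part, true_state) & public)
        # Everyone learns which states are consistent with that announcement:
        public = frozenset(w for w in public
                           if posterior(cell(part, w) & public) == q)
        last[speaker] = q
        print(f"round {r}: agent {'AB'[speaker]} announces {q}")
        if last[0] is not None and last[0] == last[1]:
            break
    return last

dialogue(true_state=3)
</pre>

In this run the agents open with different posteriors (1/3 and 1/2), but after conditioning on each other's announcements both settle on 1/3; the theorem says that once their posterior assignments are common knowledge they cannot remain different.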
 
Outside of well-functioning [[prediction market]]s, Aumann's conditions can probably be reached only by careful [[deliberation|deliberative discourse]]; whether deliberation tends to resolve disagreements in practice is an empirical question.
  
 

==See also==

==Blog posts==

==References==

* Robert J. Aumann (1976). "Agreeing to Disagree". ''The Annals of Statistics'' 4 (6): 1236–1239. ISSN 0090-5364.
* Tyler Cowen and Robin Hanson (2004). "Are Disagreements Honest?".