Newcomb's problem

Wikipedia has an article about Newcomb's paradox.

In Newcomb's problem, a superintelligence called Omega shows you two boxes, A and B, and offers you the choice of taking only box A, or both boxes A and B. Omega has put $1,000 in box B. If Omega thinks you will take box A only, he has put $1,000,000 in box A; otherwise he has left box A empty. Omega has played this game many times, and has never been wrong in his predictions about whether someone will take both boxes or not.

Terms used in relation to this paradox:

  • Omega: the superintelligence who decides whether to put the million in box A
  • one-box: to take only box A
  • two-box: to take both boxes
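
As a quick illustration, the payoff structure can be tabulated as follows (a sketch in Python; the variable and function names are ours, but the dollar amounts are those given above):

    # Payoff table for Newcomb's problem, using the amounts given above.
    # Box B always holds $1,000; box A holds $1,000,000 only if Omega
    # predicted that you would one-box.
    BOX_B = 1_000
    MILLION = 1_000_000

    def payoff(prediction, choice):
        """Dollars you walk away with, given Omega's prediction and your choice."""
        box_a = MILLION if prediction == "one-box" else 0
        return box_a if choice == "one-box" else box_a + BOX_B

    for prediction in ("one-box", "two-box"):
        for choice in ("one-box", "two-box"):
            print(f"Omega predicted {prediction}, you {choice}: ${payoff(prediction, choice):,}")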


A succinct introduction to the analysis of the paradox, paraphrased from Gary Drescher's Good and Real:

What makes this a "paradox" is that it brings into sharp conflict two distinct intuitions we have about decision-making, which rarely bear on the same situation but clash in the case of Newcomb's problem. The first intuition is: considering rational expectations, act so as to bring about desired outcomes. This suggests one-boxing: we expect, based on the evidence of Omega's track record, that we will find box A empty if we two-box and full if we one-box. The second intuition is: act only if your action will alter the outcome. This suggests two-boxing: our decision to take one box or both cannot alter what Omega has already placed in the boxes, which robs the first intuition of its power to suggest one-boxing; and since two-boxing then has strictly greater expected utility, we choose it.
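
The clash can be made concrete with a rough expected-value comparison (a sketch; the 0.99 reliability figure here is our assumption standing in for Omega's unbroken track record, not part of the problem statement):

    # Intuition 1 (act on rational expectations): treat your choice as strong
    # evidence about what Omega predicted, and compare expected payoffs.
    p = 0.99  # assumed probability that Omega's prediction matches your actual choice

    ev_one_box = p * 1_000_000 + (1 - p) * 0                # box A is probably full
    ev_two_box = p * 1_000 + (1 - p) * (1_000_000 + 1_000)  # box A is probably empty
    print(ev_one_box, ev_two_box)  # 990000.0 vs 11000.0 -> favours one-boxing

    # Intuition 2 (only act if your action alters the outcome): the boxes are
    # already filled or empty, and for either fixed state two-boxing pays
    # exactly $1,000 more, so it dominates.
    for box_a in (1_000_000, 0):
        assert (box_a + 1_000) - box_a == 1_000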

Irrelevance of Omega's physical impossibility

Sometimes people dismiss Newcomb's problem because a being such as Omega is physically impossible. Actually, the possibility or impossibility of Omega is irrelevant. Consider a skilled human psychologist who can predict other people's actions with, say, 65% accuracy, and imagine them running Newcomb trials with themselves in the role of Omega. Even this far weaker predictor preserves the conflict between the two intuitions, as the expected values below illustrate.
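
The same expected-value comparison goes through with the 65%-accurate predictor (again a sketch, reusing the dollar amounts from the problem statement above):

    p = 0.65  # the psychologist predicts your choice correctly 65% of the time

    ev_one_box = p * 1_000_000                 # box A full in 65% of one-box trials
    ev_two_box = (1 - p) * 1_000_000 + 1_000   # box A full in only 35% of two-box trials
    print(ev_one_box, ev_two_box)              # 650000.0 vs 351000.0

    # Break-even point: one-boxing has the higher expected payoff whenever
    # p * 1_000_000 > (1 - p) * 1_000_000 + 1_000, i.e. whenever p > 0.5005,
    # so the predictor only needs to be slightly better than chance.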

Blog posts

  • Newcomb's Problem and Regret of Rationality (http://lesswrong.com/lw/nc/newcombs_problem_and_regret_of_rationality/)
  • Formalizing Newcomb's (http://lesswrong.com/lw/7v/formalizing_newcombs/) by cousin_it (http://lesswrong.com/user/cousin_it/)
  • Newcomb's Problem standard positions (http://lesswrong.com/lw/90/newcombs_problem_standard_positions/)
  • Newcomb's Problem vs. One-Shot Prisoner's Dilemma (http://lesswrong.com/lw/6r/newcombs_problem_vs_oneshot_prisoners_dilemma/) by Wei Dai (http://weidai.com/)
  • Decision theory: Why Pearl helps reduce “could” and “would”, but still leaves us with at least three alternatives (http://lesswrong.com/lw/17b/decision_theory_why_pearl_helps_reduce_could_and/) by Anna Salamon
  • All Less Wrong posts tagged "newcomb"

See also