# Newcomb's problem


*Latest revision as of 04:20, 14 October 2016*

In **Newcomb's problem**, a superintelligence called Omega shows you two boxes, A and B, and offers you the choice of taking only box A, or both boxes A and B. Omega has put $1,000 in box B. If Omega thinks you will take box A only, he has put $1,000,000 in it. Otherwise he has left it empty. Omega has played this game many times, and has never been wrong in his predictions about whether someone will take both boxes or not.

Terms used in relation to this paradox:

- **Omega**: the superintelligence who decides whether to put the million in box A
- **one-box**: to take only box A
- **two-box**: to take both boxes

A succinct introduction to analysis of the paradox, paraphrased from Gary Drescher's *Good and Real*:

What makes this a "paradox" is that it brings into sharp conflict two distinct intuitions we have about decision-making, which rarely bear on the same situation but clash in the case of Newcomb's. The first intuition is:

*Considering rational expectations, act so as to bring about desired outcomes.* This suggests one-boxing: we expect, based on the evidence, that we will find box A empty if we two-box.

The second intuition is:

*Only act if your action will alter the outcome.* This suggests two-boxing: our decision to take one box or both cannot alter the outcome, which robs the first intuition of its power to suggest one-boxing; and since two-boxing yields strictly more money for any fixed contents of the boxes, we choose that.
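The dominance argument behind the second intuition can be made concrete with the problem's payoff structure. The following sketch (the dictionary layout and names are illustrative, not from the original text) shows that for either of Omega's predictions, two-boxing pays exactly $1,000 more:

```python
# Newcomb's payoff structure: Omega's prediction fixes the contents of
# box A; your choice then determines which boxes you collect.
PAYOFFS = {
    # (Omega's prediction, your choice): payout in dollars
    ("one-box", "one-box"): 1_000_000,  # box A full; you take A only
    ("one-box", "two-box"): 1_001_000,  # box A full; you take both
    ("two-box", "one-box"): 0,          # box A empty; you take A only
    ("two-box", "two-box"): 1_000,      # box A empty; you take both
}

# The dominance intuition: holding Omega's prediction fixed,
# two-boxing always pays $1,000 more than one-boxing.
for prediction in ("one-box", "two-box"):
    gain = PAYOFFS[(prediction, "two-box")] - PAYOFFS[(prediction, "one-box")]
    assert gain == 1_000
print("two-boxing dominates for any fixed prediction")
```

The tension is that the rows of this table are not equally likely for both choices: Omega's near-perfect track record means one-boxers almost always land in the $1,000,000 cell.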

## Irrelevance of Omega's physical impossibility

Sometimes people dismiss Newcomb's problem because a being such as Omega is physically impossible. Actually, the possibility or impossibility of Omega is irrelevant. Consider a skilled human psychologist who can predict other humans' actions with, say, 65% accuracy. Now imagine they start running Newcomb trials with themselves as Omega. Even with this much weaker predictor, the evidential expected payoff for one-boxing exceeds that for two-boxing, so the dilemma retains its force.
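To see why a merely 65%-accurate predictor suffices, one can weight each payoff by the chance the prediction matches the choice. This is a sketch of the evidential expected-value calculation (the 65% figure is from the text; weighting outcomes by the predictor's accuracy is the assumption):

```python
# Evidential expected value of each choice against an imperfect predictor.
ACCURACY = 0.65            # predictor accuracy from the text
MILLION, THOUSAND = 1_000_000, 1_000

# One-boxing: the predictor most likely foresaw it and filled box A.
ev_one_box = ACCURACY * MILLION
# Two-boxing: you get box B for sure, plus box A when the predictor erred.
ev_two_box = THOUSAND + (1 - ACCURACY) * MILLION

print(f"E[one-box] = ${ev_one_box:,.0f}")   # $650,000
print(f"E[two-box] = ${ev_two_box:,.0f}")   # $351,000
```

On this evidential reckoning, one-boxing wins by a wide margin even though the predictor is far from infallible; the break-even accuracy is just over 50%.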

## Blog posts

- Newcomb's Problem and Regret of Rationality
- Formalizing Newcomb's by cousin_it
- Newcomb's Problem standard positions
- Newcomb's Problem vs. One-Shot Prisoner's Dilemma by Wei Dai
- Decision theory: Why Pearl helps reduce “could” and “would”, but still leaves us with at least three alternatives by Anna Salamon
- All Less Wrong posts tagged "newcomb"