Pascal's mugging

From Lesswrongwiki
Revision as of 03:22, 4 November 2009

Pascal's mugging refers to a thought experiment in decision theory, a finite analogue of Pascal's wager. The situation is dramatized by a mugger:

Now suppose someone comes to me and says, "Give me five dollars, or I'll use my magic powers from outside the Matrix to run a Turing machine that simulates and kills 3^^^^3 people."

If an agent's utilities over outcomes can potentially grow much faster than the probabilities of those outcomes diminish, then its decisions will be dominated by tiny probabilities of hugely important outcomes. The prior over computable universes in Solomonoff induction seems to have this problem in particular. More generally, if prior probability goes as the simplicity of physical law, then small increases in complexity can correspond to enormous increases in the size of even a finite universe.
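A minimal numerical sketch of this domination effect, with purely illustrative numbers (the prior and the stakes are assumptions for the example, not figures from the text):

```python
from fractions import Fraction

# Even an absurdly small prior on the mugger's claim is swamped by the
# claimed stakes, so naive expected-utility maximization says to pay.
p_threat = Fraction(1, 10**100)   # assumed tiny prior that the threat is real
lives_at_stake = 10**1000         # stand-in for 3^^^^3, which is vastly larger
expected_lives_saved = p_threat * lives_at_stake

print(expected_lives_saved > 10**500)  # True: the expected stakes dwarf the $5 cost
```

The point is that no fixed small prior helps: the mugger can always name a number large enough that the product still comes out enormous.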

Intuitively, one is not inclined to acquiesce to the mugger's demands - or even pay all that much attention one way or another - but what kind of prior does this imply?

Robin Hanson has suggested penalizing the prior probability of hypotheses which argue that we are in a surprisingly unique position to affect large numbers of other people who cannot symmetrically affect us. Since only one in 3^^^^3 people can be in a unique position to ordain the existence of at least 3^^^^3 other people who are not symmetrically in such a situation themselves, the prior probability would be penalized by a factor on the same order as the utility.
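Hanson's penalty can be sketched numerically (again with illustrative stand-in numbers): because the penalty scales with the same quantity as the utility, the two cancel and the huge stakes stop dominating.

```python
from fractions import Fraction

# N stands in for 3^^^^3; a 1/N penalty on the prior cancels a utility of order N.
N = 10**1000
base_prior = Fraction(1, 10**100)   # assumed prior before the penalty
penalized_prior = base_prior / N    # only 1 in N agents holds so unique a position
expected_lives_saved = penalized_prior * N

print(expected_lives_saved == base_prior)  # True: the huge stakes no longer dominate
```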

Peter de Blanc has proven that if an agent assigns a finite probability to every computable hypothesis (where the probabilities are computable, or computably bounded below) and assigns unboundedly large finite utilities over percept sequences (that is, utilities that are a direct function of sensory information rather than of the possible external universes responsible for those percepts; again computable, or computably bounded below), then the sum in the expected utility formula does not converge.
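A toy instance of the non-convergence (the particular prior and utility here are assumptions chosen for illustration, not de Blanc's construction): if hypothesis n has prior 2^-n but utility 3^n, each term of the expected-utility series is (3/2)^n, so the partial sums grow without bound.

```python
from fractions import Fraction

# Partial sums of sum_n p(n) * u(n) with p(n) = 2**-n and u(n) = 3**n:
# each term is (3/2)**n, so the series diverges.
def partial_sum(k):
    return sum(Fraction(1, 2**n) * 3**n for n in range(1, k + 1))

print(partial_sum(20) > partial_sum(10))  # True: the partial sums keep growing
```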

Peter de Blanc's paper, and the Pascal's Mugging argument, are sometimes misinterpreted as showing that any agent with an unbounded finite utility function over outcomes is not consistent, but this has yet to be demonstrated.

Blog posts

See also

References

  • Nick Bostrom (2009). "Pascal's Mugging". Analysis 69 (3): 443-445.