Acausal trade

We have agents A and B, possibly separated in space or time so that no interaction between them is possible. They might be in separate Everett branches; or they might be outside each other's light cones, e.g., very distant from each other in an expanding universe, or B might be a billion years in A's future on Earth. A and B can each do something the other wants, and each values that thing less than the other does. This is the usual condition for trade.

However, A and B cannot count on the usual enforcement mechanisms to ensure cooperation, e.g., an expectation of future interactions or an outside enforcer.

A and B will cooperate because each knows that the other can somehow predict its behavior very well, like Newcomb's Omega.

Each knows that if it defects (respectively, cooperates), the other will know this and likewise defect (respectively, cooperate). The best choice is therefore to cooperate, since, as usual in trade, Cooperate/Cooperate is better for both sides than Defect/Defect.
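The payoff logic can be made concrete with a small sketch. The specific numbers below are illustrative assumptions in the usual prisoner's-dilemma ordering, not part of this page; the point is only that, when each agent's move is reliably mirrored by the other's prediction, the only outcomes an agent can bring about are the matched ones, and Cooperate/Cooperate is the better of those.

```python
# Minimal numeric sketch, assuming standard prisoner's-dilemma-style payoffs
# (the specific numbers are illustrative, not taken from this page).

PAYOFF = {            # (my move, other's move) -> my payoff
    ("C", "C"): 3,    # both cooperate: the trade happens
    ("C", "D"): 0,    # I cooperate, the other defects: I am exploited
    ("D", "C"): 5,    # I defect, the other cooperates: I exploit the other
    ("D", "D"): 1,    # both defect: no trade
}

def choose_move():
    """Under (near-)perfect mutual prediction, my move and the other's move
    match, so the only outcomes I can bring about are C/C and D/D.
    Pick whichever matched outcome pays more."""
    return "C" if PAYOFF[("C", "C")] > PAYOFF[("D", "D")] else "D"

print(choose_move())  # -> "C"
```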

This scenario does not specify how A and B can predict each other.

They might have each other's source code.

They might have identical or similar mental architectures, so that each one knows that its own mental processes approximately simulate the other's. See Drescher's acausal subjunctive cooperation.

Or they might be able to simulate each other, or predict the other's behavior analytically, perhaps approximately and probabilistically, to avoid the problem of simulating something of the same complexity as oneself in full.
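One way to picture the last point is to bound the simulation so the regress of each agent modeling the other terminates. The following toy sketch is an illustrative assumption, not something specified by this page: each agent cooperates iff a depth-limited simulation of the other's decision procedure cooperates, with an (assumed) optimistic default when the simulation budget runs out.

```python
# Toy sketch of prediction by bounded simulation, assuming each agent holds
# the other's decision procedure. The function names, the depth cutoff, and
# the optimistic default at the cutoff are illustrative assumptions.

def agent_a(other, depth=3):
    """Cooperate iff a depth-limited simulation of the other cooperates."""
    if depth == 0:
        return "C"   # assumed default when the simulation budget is exhausted
    return "C" if other(agent_a, depth - 1) == "C" else "D"

def agent_b(other, depth=3):
    """B runs the same kind of procedure on A."""
    if depth == 0:
        return "C"
    return "C" if other(agent_b, depth - 1) == "C" else "D"

print(agent_a(agent_b))  # -> "C"
print(agent_b(agent_a))  # -> "C"
```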