Acausal trade

From Lesswrongwiki
Jump to: navigation, search
Line 1: Line 1:

In acausal trade, agents P and Q cooperate as follows: P simulates or otherwise analyzes agent Q and learns that Q does something that P wants. P also learns that the symmetrical statement holds: Q can simulate or analyze P well enough to know that P likewise does something that Q wants.

Discussion

We have agents P and Q, possibly separated in space and time so that no interaction is possible. They might be in separate Everett branches; or they might be outside each other's light cones, e.g., very distant from each other in an expanding universe, or Q might be a billion years in P's future on Earth. On the other hand, they might be able to communicate, as in ordinary trade.

Each of P and Q can do something the other wants, and values doing that thing less than the other values having it done. This is the usual condition for trade, and it can hold even if P and Q are completely separated. One example: an FAI wants the well-being of all humans, even those it can never be in causal contact with.
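As a worked illustration of this condition (a minimal sketch with made-up numbers, not taken from the article), each agent below pays a cost for doing its favor that is smaller than the value the other places on receiving it, so mutual performance leaves both better off than the no-trade baseline.

```python
# Toy illustration of the usual condition for trade, with hypothetical numbers.

P_COST = 3    # hypothetical cost to P of doing the thing Q wants
Q_VALUE = 10  # hypothetical value Q places on P doing it

Q_COST = 2    # hypothetical cost to Q of doing the thing P wants
P_VALUE = 8   # hypothetical value P places on Q doing it

p_no_trade, q_no_trade = 0, 0       # baseline: neither agent does anything
p_trade = P_VALUE - P_COST          # P receives Q's favor and pays its own cost
q_trade = Q_VALUE - Q_COST          # Q receives P's favor and pays its own cost

assert p_trade > p_no_trade and q_trade > q_no_trade   # both sides gain: 5 > 0 and 8 > 0
```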

In acausal trade, P and Q cannot count on the usual enforcement mechanisms to ensure cooperation, e.g., an expectation of future interactions or an outside enforcer.

P and Q will cooperate because each knows that the other can somehow predict its behavior very well, like Newcomb's Omega.

Each knows that if it defects, the other will predict this and defect too, and that if it cooperates, the other will predict this and cooperate. So the best choice is to cooperate, since, as usual in trade, Cooperate/Cooperate is better for both sides than Defect/Defect.
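A minimal sketch of this argument, using hypothetical Prisoner's-Dilemma-style payoffs: once an agent assumes its choice will be correctly predicted and mirrored, the off-diagonal outcomes drop out, and comparing Cooperate/Cooperate with Defect/Defect favors cooperating.

```python
# Toy sketch: under perfect mutual prediction, only (C, C) and (D, D) are reachable.

PAYOFFS = {                      # (row player's payoff, column player's payoff)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def payoff_under_mirroring(my_action):
    other_action = my_action     # the other agent predicts my choice and acts the same way
    return PAYOFFS[(my_action, other_action)][0]

best_action = max(["C", "D"], key=payoff_under_mirroring)
assert best_action == "C"        # 3 (mutual cooperation) beats 1 (mutual defection)
```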

This scenario does not specify how P and Q are able to predict each other; there are several possibilities.

1. They might have each other's source code.

2. Or they might be able to simulate each other, or to predict the other's behavior analytically. The simulation may be approximate and make probabilistic predictions, to avoid the problem of simulating something as complex as oneself in full (see the sketch after this list).

3. In particular, P and Q might have identical or similar mental architectures, so that each one knows that its own mental processes approximately simulate the other's. See Gary Drescher's acausal subjunctive cooperation.
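A minimal sketch of point 2, under the simplifying assumption that the two agents are symmetric: each decides by simulating its partner to a bounded depth, with the depth cap standing in for approximate, probabilistic prediction. The function name, depth limit, and base-case prior are invented for illustration.

```python
# Toy sketch of bounded mutual simulation between two symmetric agents.

def decide(depth=3):
    """Return 'C' or 'D' for an agent that can (approximately) simulate its partner."""
    if depth == 0:
        return "C"                               # crude prior at the bottom of the regress
    partner_prediction = decide(depth - 1)       # simulate the partner simulating me, one level shallower
    return "C" if partner_prediction == "C" else "D"   # cooperate iff the partner is predicted to

print(decide())   # -> 'C' for both agents
```

Note that the cooperative outcome here depends on the prior assumed at the bottom of the regress; with a defecting prior at depth 0, the same recursion would yield mutual defection.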

Alexander Kruel's "The Acausal Trade Argument" (http://kruel.co/2012/07/27/the-acausal-trade-argument/) summarizes some of these ideas.


Relevance to Friendly AI

Acausal trade may be a way to gain the cooperation of a future AI. Suppose we know that the AI will want us to behave a certain way, and we can prove that, once it arises, it will be Friendlier if we do what it wants now; and suppose it can symmetrically prove that we do what it wants, given that we have proven this about it. Then we can trade with the AI even though it does not yet exist.

This approach rests on being able to prove certain facts about human behavior, or, similarly, on humans being able to provably commit to a course of behavior.