In acausal trade, agents P and Q cooperate as follows: P simulates or otherwise analyzes agent Q, and learns that Q provably does something that P wants, if P does what Q wants; and Q symmetrically learns the same about P.
We have agents P and Q, possibly separated in space-time so that no interaction is possible. They might be in separate Everett branches; or they might be outside each other's light cones, e.g., very distant in an expanding universe; or Q might be a billion years in P's future on Earth. On the other hand, they might be able to communicate, as in ordinary trade.
P and Q each can do something the other wants, and value that thing less than the other does. This is the usual condition for trade. This can happen even if P and Q are completely separated. One example: An FAI wants the well-being of all humans, even those it cannot be in causal contact with, because they are in the past.
In acausal trade, P and Q cannot count on the usual enforcement mechanisms to ensure cooperation-- there is no expectation of future interactions or an outside enforcer.
P and Q will cooperate because each knows that the other can somehow predict its behavior very well, like Newcomb's Omega.
Each knows that if it defects (respectively: cooperates), the other will know this, and defect (respectively: cooperate), and so the best choice is to cooperate, since, as usual in trade, Cooperate/Cooperate is better for both sides than Defect/Defect.
This can also be described as P and Q provably precommitting to cooperate.
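The mutual-prediction logic above can be sketched as a toy decision procedure (all names here are hypothetical, not from the original text). Each agent decides by running the other's decision procedure on itself; a recursion-depth cap stands in as a crude substitute for approximate, resource-bounded prediction.

```python
# Toy sketch of mutual prediction in a one-shot cooperation problem.
# Agents are decision procedures that take the partner's procedure as input.

def predictable_cooperator(other, depth=3):
    """Cooperate iff the partner is predicted to cooperate with us."""
    if depth == 0:
        # Prediction has bottomed out; assume cooperation as the base case.
        return "C"
    # "Simulate" the partner by running its decision procedure on ourselves.
    prediction = other(predictable_cooperator, depth - 1)
    return "C" if prediction == "C" else "D"

def defect_bot(other, depth=3):
    """An agent that always defects, regardless of its partner."""
    return "D"

# Two predictable cooperators each predict the other cooperates, so both
# cooperate; against an unconditional defector, the cooperator defects.
print(predictable_cooperator(predictable_cooperator))  # "C"
print(predictable_cooperator(defect_bot))              # "D"
```

The depth cap is what lets each agent "simulate" something of comparable complexity to itself without infinite regress, echoing the approximate-prediction point below.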
This scenario does not specify how P and Q can predict each other.
- They might have each other's source code.
- They might be able to simulate each other, or to predict the other's behavior analytically. The simulation may be approximate, and the predictions probabilistic, to avoid the problem of fully simulating something of the same complexity as oneself.
- In particular, P and Q might have identical or similar mental architectures, so that each one knows that its own mental processes approximately simulate the other's. See Gary Drescher's acausal subjunctive cooperation.
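The last mechanism can be made concrete in a short sketch (payoff values are illustrative assumptions, not from the original text). Two copies of the same deterministic decision procedure must produce the same output, so each copy need only compare the two symmetric outcomes:

```python
# One-shot prisoner's dilemma payoffs for the row player (illustrative values).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def subjunctive_decision():
    """An agent that knows its counterpart runs this exact procedure.

    Whatever this procedure outputs, the counterpart outputs too, so only
    the diagonal outcomes C/C and D/D are attainable; pick the better one.
    """
    return max(["C", "D"], key=lambda action: PAYOFF[(action, action)])

# Both copies run the same code, so the joint outcome is Cooperate/Cooperate.
print(subjunctive_decision(), subjunctive_decision())  # C C
```

This is the sense in which similar architectures "approximately simulate" each other: no explicit simulation is needed, only the knowledge that both outputs are produced by the same process.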
An example of acausal trade with simple resource requirements
At its most abstract, the agents in this model are simply optimization algorithms, and any agent can run another as a subalgorithm if that improves its own optimization power. Let T be an algorithm for which time is the most valuable resource in optimizing its utility function, while for S, space is most valuable.
T and S must both deal with subjective uncertainty as to whether they will be instantiated in a universe where time is in abundance (a universe which will not end for a long time), or in a space-rich universe. (This is sometimes expressed in terms of multiverse theory, but it only requires that we specify subjective uncertainty, i.e., that the algorithms be able to optimize in either kind of universe, not knowing which they are in.)
T can prove about S that it will "do a favor" for T, i.e., that S will run T as a subalgorithm if it finds itself instantiated in a time-rich universe. S can prove that T will "reciprocate" -- i.e., T will run S in a space-rich universe, in acausal trade for S's proven commitment.
T and S also estimate that the other is likely to exist. This is because super-powerful intelligences (optimizers) are an attractor in any universe that has a general intelligence at all, since any general intelligence will want to self-improve in order to better optimize towards its goals. Given that superintelligences exist in other possible universes, T and S will realize that one efficient technique in a super-optimizer is acausal trade, thus implying the existence of the superintelligent acausal trading partner.
Algorithms T and S are so structured because, as we have described, acausal trade is a useful technique in certain conditions for optimizing towards each agent's own goals: Acausal trade is one of the tricks a good optimizer can use to optimize its utility function.
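The T/S trade above reduces to a simple decision rule, sketched here under the stated assumptions (function and variable names are hypothetical): whichever agent is instantiated runs, as a subalgorithm, the algorithm that values the current universe's abundant resource most.

```python
# Each algorithm's most-valued resource, as described in the text.
PREFERRED_UNIVERSE = {"T": "time-rich", "S": "space-rich"}

def acausal_policy(self_name, partner_name, universe):
    """Return which algorithm the instantiated agent devotes resources to.

    The trade commitment: do the partner the favor if this universe is the
    one the partner values; otherwise optimize for oneself.
    """
    if PREFERRED_UNIVERSE[partner_name] == universe:
        return partner_name
    return self_name

# S, instantiated in a time-rich universe, runs T as a subalgorithm:
print(acausal_policy("S", "T", "time-rich"))   # T
# T reciprocates by running S in a space-rich universe:
print(acausal_policy("T", "S", "space-rich"))  # S
# In its own preferred universe, each agent simply optimizes for itself:
print(acausal_policy("T", "S", "time-rich"))   # T
```

Each side can verify the other's commitment by inspecting this policy, which is what makes the provable cooperation described above possible.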
Acausal trade with complex resource requirements
In the example above, the resource requirements are very simple: time and space. It may be that in the limit of increasing intelligence, time and space become the only relevant resources. But for an agent with goals of human-level complexity and arbitrariness, and correspondingly complex resource requirements, any single potential trading partner will consider the agent's specific desires too unlikely to be worth satisfying directly.
However, in a multiverse, all agents and all trading partners exist in many near-identical instances. If an agent randomly samples a tiny fraction of all possible types of acausal trading partner, and its almost-identical counterparts elsewhere in the multiverse do likewise, then in aggregate they will sample all of the major types of trading partner, in rough proportion to their frequency, so long as their models are good. All these near-identical instances of the agent then produce what the sampled trading partners would want, provided those partners are running a symmetric algorithm that would produce what the agents want if they sampled them in return. The computational effort is shared out across the ensemble, and goods given and received come into rough balance as long as the sampling is representative.
Thus, each agent is producing stuff only for a tiny fraction of potential trading partners, and cannot expect to trade one-on-one with any given trading partner. But because there are a huge number of near-copies of that agent, each trading partner has a good chance that someone will do what it wants.
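The sampling argument can be illustrated numerically (the partner types and their frequencies here are invented for the example). Each near-copy of the agent samples one partner type from a shared frequency model; across many copies, coverage tracks the model:

```python
import random

random.seed(0)  # for a reproducible illustration

# Assumed frequency model over major partner types (hypothetical values).
PARTNER_FREQUENCIES = {"A": 0.5, "B": 0.3, "C": 0.2}

NUM_COPIES = 10_000  # near-identical instances of the agent

# Each copy independently samples one partner type to "produce stuff" for.
samples = random.choices(
    list(PARTNER_FREQUENCIES),
    weights=list(PARTNER_FREQUENCIES.values()),
    k=NUM_COPIES,
)

# Fraction of copies serving each partner type.
coverage = {t: samples.count(t) / NUM_COPIES for t in PARTNER_FREQUENCIES}
print(coverage)  # each fraction lands near its frequency in the model
```

Any single copy serves only one partner type, yet collectively the ensemble serves all major types roughly in proportion to their frequency, which is what lets goods given and received come into rough balance.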
Relevance to Friendly AI
Acausal trade may be a way to get the cooperation of a future AI. Suppose we know that the AI would want us to behave a certain way, and we can prove that it will be Friendlier, once it arises, if we do what it wants now; and suppose it can symmetrically prove that we will do what it wants, given that we have proven this about it. Then we can trade with it even though it does not yet exist.
This approach rests on being able to prove certain facts about human behavior; or to put it another way, for humans to be able to provably commit to behavior.