Acausal trade

In acausal trade, agents P and Q cooperate as follows: P simulates or otherwise analyzes agent Q and learns that Q provably does something P wants if P does what Q wants; Q symmetrically learns the same about P.

Description

We have agents P and Q, possibly separated in space-time so that no interaction is possible. They might be in separate Everett branches; or they might be outside each other's light cones, e.g., very distant in an expanding universe; or Q might be a billion years in P's future on Earth. Alternatively, they might be able to communicate, as in ordinary trade.

P and Q each can do something the other wants, and value that thing less than the other does. This is the usual condition for trade. However, acausal trade can happen even if P and Q are completely separated and have no opportunity to affect each other causally. One example: an FAI wants the well-being of all humans, even those it cannot be in causal contact with because they are in its past.

In acausal trade, P and Q cannot count on the usual enforcement mechanisms to ensure cooperation -- there is no expectation of future interactions or an outside enforcer.

P and Q cooperate because each knows that the other can somehow predict its behavior very well, like Omega in Newcomb's problem.

Each knows that if it defects (respectively, cooperates), the other will know this and defect (respectively, cooperate) in turn. The best choice is therefore to cooperate, since, as usual in trade, Cooperate/Cooperate is better for both sides than Defect/Defect.

This can also be described as P and Q provably precommitting to cooperate.
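To make the mirroring argument concrete, here is a minimal Python sketch, not part of the original article: the payoff numbers are the conventional illustrative Prisoner's Dilemma values, and the choose function is a hypothetical stand-in for an agent facing a perfect predictor.

    # Illustrative one-shot Prisoner's Dilemma payoffs for the row player:
    # "C" = cooperate, "D" = defect.
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    def choose(payoff) -> str:
        # A perfect predictor mirrors whatever this agent decides, so the
        # only reachable outcomes are C/C and D/D; pick the better one.
        return "C" if payoff[("C", "C")] > payoff[("D", "D")] else "D"

    # P and Q reason symmetrically and both land on cooperation:
    print(choose(PAYOFF), choose(PAYOFF))  # -> C C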

Prediction mechanisms

This scenario does not specify how P and Q can predict each other.

  1. They might have each other's source code.
  2. They might be able to simulate each other, or to predict the other's behavior analytically. The simulation may be approximate, and the predictions probabilistic, to avoid tractability problems, in particular the problem of fully simulating something of the same complexity as oneself.
  3. In particular, P and Q might have identical or similar mental architectures, so that each one knows that its own mental processes approximately simulate the other's (a minimal code sketch follows this list). See Gary Drescher's "acausal subjunctive cooperation."
  4. Acausal trade may even be possible for two agents that know nothing about each other except that the other is an extremely powerful superintelligence, in a form of mutual reflective convergence. An optimal superintelligence may conclude that since reliable acausal trade is supported by the best possible optimization architecture, any other optimal superintelligence must also support it.
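As an illustration of mechanisms 1 and 3, here is a minimal Python sketch of source-code-based cooperation, assuming the agents exchange verbatim source text; clique_bot is a hypothetical name, and the scheme deliberately cooperates only with exact copies of itself.

    import inspect

    def clique_bot(opponent_source: str) -> str:
        """Cooperate exactly when the opponent's source matches our own.

        Two copies of this program, each given the other's source code,
        can verify by inspection what the other will do, and so reach
        Cooperate/Cooperate with no causal interaction.
        """
        my_source = inspect.getsource(clique_bot)
        return "C" if opponent_source == my_source else "D"

    # Two instances read each other's (identical) source and cooperate:
    source = inspect.getsource(clique_bot)
    print(clique_bot(source), clique_bot(source))  # -> C C

Matching on exact source is the crudest version of mechanism 3; more robust variants would search for a proof that the opponent cooperates rather than demanding syntactic identity.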

An example of acausal trade with simple resource requirements

At its most abstract, the agents in this model are simply optimization algorithms, and any agent can run another as a subalgorithm if that is useful in improving optimization power. Let T be an algorithm for which time is the most valuable resource in optimizing its utility function, and S one for which space is most valuable.

T and S must both deal with subjective uncertainty as to whether they will be instantiated in a universe where time is abundant (a universe which will not end for a long time) or a space-rich universe (an extremely large universe). (This uncertainty is sometimes expressed in terms of multiverse theory, but it only requires subjective uncertainty, i.e., that the algorithms must be able to optimize in either kind of universe, not knowing in advance which they are in.)

T can prove about S that it will "do a favor" for T, i.e., that S will run T as a subalgorithm, if S finds itself instantiated in a time-rich universe. S can prove that T will "reciprocate" -- i.e., T will run S if it finds itself in a space-rich universe, in acausal trade for S's proven (possibly probabilistically proven) commitment.

T and S each also estimate that the other is likely to exist. This is because super-powerful intelligences (optimizers) are an attractor in any universe that has a general intelligence at all, since any general intelligence will want to self-improve in order to optimize better toward its goals. Given that superintelligences exist in other possible universes, T and S will realize that acausal trade is one efficient technique for a super-optimizer, which implies the existence of superintelligent acausal trading partners.

Algorithms T and S are so structured because, as we have described, acausal trade is, under certain conditions, a useful technique for optimizing toward each agent's own goals: it is one of the tricks a good optimizer can use in maximizing its utility function.
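As a rough illustration, the following sketch computes T's expected utility with and without the trade; the probability, values, and cost below are illustrative assumptions, and S's calculation is symmetric.

    P_TIME_RICH = 0.5  # subjective probability a given universe is time-rich
    V_OWN = 10.0       # value T extracts when its own universe is time-rich
    V_SUB = 6.0        # value T gets from being run as S's subalgorithm
    COST = 1.0         # cost of running the partner as a subalgorithm

    def expected_utility_T(trade: bool) -> float:
        # Without trade, T profits only if its own universe is time-rich.
        base = P_TIME_RICH * V_OWN
        if not trade:
            return base
        # With trade, T pays COST when it lands in a space-rich universe
        # (it runs S there), and gains V_SUB when S's universe is time-rich
        # and S honors the deal by running T there.
        return base - (1 - P_TIME_RICH) * COST + P_TIME_RICH * V_SUB

    print(expected_utility_T(trade=False))  # 5.0
    print(expected_utility_T(trade=True))   # 7.5

The trade is worthwhile for T exactly when the expected gain from being run by S exceeds the expected cost of running S, i.e., when P_TIME_RICH * V_SUB > (1 - P_TIME_RICH) * COST.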

Acausal trade with complex resource requirements

In the example above, the resource requirements are very simple: time and space. It may be that in the limit of increasing intelligence, time and space are the only relevant resources. But for an agent with complex and arbitrary goals, as humans have, no potential trading partner will consider acausal trade with that specific agent likely enough to do any acausal favors for it directly.

However, in a multiverse, all agents and all trading partners exist in many near-identical instances. If an agent P decides to randomly sample a tiny fraction of all possible types of acausal trading partner Q, and the almost-identical counterparts of P elsewhere in the multiverse do likewise, then in aggregate the near-copies of P will sample all of the major types of trading partner Q, in rough proportion to their frequency (assuming the models are good). All these near-identical instances of P then produce what the sampled partners Q would want, provided the potential partners Q are running a symmetric algorithm that would produce what the agents P want if they sampled them in return. The computational effort is shared out across the ensemble of P, yet as long as the sampling is representative, the goods given and received come into rough balance.

Thus, each agent P produces stuff for only a tiny fraction of potential trading partners Q, and cannot expect to trade one-on-one with any given partner. But because there are a huge number of near-copies of agent P, each trading partner Q has a good chance that some member of the ensemble of P will do what Q wants.
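A small Monte Carlo sketch of this sampling argument; the counts of partner types and of near-copies of P, and the frequency distribution over types, are all hypothetical numbers chosen for illustration.

    import random
    from collections import Counter

    random.seed(0)
    K = 1_000      # distinct types of potential trading partner Q (assumed)
    N = 100_000    # near-copies of agent P across the multiverse (assumed)

    # An assumed frequency distribution over the partner types.
    raw = [random.random() for _ in range(K)]
    total = sum(raw)
    weights = [w / total for w in raw]

    # Each near-copy of P samples one partner type and produces goods for it.
    served = Counter(random.choices(range(K), weights=weights, k=N))

    # Nearly every partner type is served by someone, and the effort spent
    # on each type tracks its frequency, so across the ensemble the goods
    # given and received come into rough balance.
    print(sum(1 for t in range(K) if served[t] > 0) / K)  # close to 1.0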

Related concept: Super-rationality

The question of why to cooperate in a one-shot Prisoner's Dilemma has long been debated. On the one hand, defection always produces a better individual result than cooperation. On the other hand, this leads to an equilibrium of defect-defect, which is bad for both agents. If they can somehow converge on mutual cooperation, they will do better than the defect-defect equilibrium.
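With the conventional illustrative payoffs, both halves of that tension can be checked mechanically, as in this short Python sketch:

    # Conventional one-shot Prisoner's Dilemma payoffs for the row player.
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    # Defection strictly dominates: whatever the other player does, D beats C.
    for other in ("C", "D"):
        assert PAYOFF[("D", other)] > PAYOFF[("C", other)]

    # Yet the defect-defect equilibrium is worse for both players than mutual
    # cooperation -- which is what convergence on C/C would buy.
    assert PAYOFF[("C", "C")] > PAYOFF[("D", "D")]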

Douglas Hofstadter (see references) coined the term "super-rationality" to express this state of convergence. Gary Drescher (see references) developed the concept, introducing the term "acausal subjunctive morality" for an ethical system of behavior based on this mechanism. Eliezer Yudkowsky has discussed this one-shot Prisoner's Dilemma at The True Prisoner's Dilemma.

Acausal trade takes this form of cooperation several steps further: Agent P does not simply cooperate with agent Q, doing what Q wants and knowing that Q will do what P wants. Agent P may also have to surmise the potential existence of Q, and calculate probability estimates about what Q would want P to do and what Q will do for P in return.
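One hedged way to picture that extra step: P weighs the cost of a favor against its benefit, discounted by P's probability estimates that Q exists and reciprocates. The function name and all the numbers below are illustrative assumptions.

    def ev_of_favor(p_exists: float, p_reciprocates: float,
                    benefit: float, cost: float) -> float:
        """Expected value, to P, of doing a favor for a merely hypothesized Q."""
        return p_exists * p_reciprocates * benefit - cost

    # Even a partner P has never observed can be worth trading with:
    print(ev_of_favor(p_exists=0.2, p_reciprocates=0.9, benefit=100.0, cost=5.0))
    # -> 13.0 (positive, so P does the favor)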

Relevance to Friendly AI

Acausal trade may be a way to get the cooperation of a future AI. Suppose we know that the AI would want us to behave a certain way, and we can prove that it will do good things for us once it arises, provided we do what it wants now; and suppose it can prove the symmetrical statement, namely that we will do what it wants if we have proven this behavior about it. Then we can trade with it, even though it does not yet exist.

This approach rests on being able to prove certain facts about human behavior -- or, to put it another way, on humans being able to provably commit to a given behavior.

See also

References

Douglas Hofstadter (1985). Metamagical Themas: Questing for the Essence of Mind and Pattern. Basic Books.
Gary Drescher (2006). Good and Real: Demystifying Paradoxes from Physics to Ethics. MIT Press.
Eliezer Yudkowsky (2008). The True Prisoner's Dilemma. Less Wrong.