Acausal trade

In acausal trade, agents cooperate by each predicting what the other wants and doing it, even though they might have absolutely no way of ever communicating or affecting each other, nor any direct evidence that the other exists.

Background: Super-rationality and the one-shot Prisoner's Dilemma

The concept of acausal trade emerged out of the much-debated question of how to achieve cooperation on a Prisoner's Dilemma, where, by design, the two players are not allowed to communicate. On the one hand, a player in the one-shot Prisoner's Dilemma considering the causal consequences of a decision finds that defection always produces a better result. On the other hand, if the other player symmetrically reasons this way, the result is an equilibrium of Defect/Defect, which is bad for both agents. If they can somehow converge on mutual cooperation, they will each do better, on their own individual utility measure, than in the Defect/Defect equilibrium. The question is what decision theory allows this beneficial equilibrium in the one-shot Prisoner's Dilemma, rather than falling into the mutual-defection equilibrium.

Douglas Hofstadter (see references) coined the term "super-rationality" to express this state of convergence. He illustrated it with a game in which twenty players, who do not know each other's identities, each get an offer: If exactly one player asks for the prize of a billion dollars, that player gets it, but if no one or more than one asks, no one gets it. The "correct" decision--the decision which maximizes expected utility for each player, if all players symmetrically make the same decision--is to randomize, giving oneself a one-in-20 chance of asking for the prize. Players cannot communicate, but each can reason that the others are reasoning similarly. Hofstadter's insight was an important starting point for further investigation of acausal game theory.
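
The arithmetic behind the one-in-20 figure can be checked directly: if each of the 20 players independently asks with probability p, the chance that exactly one player asks is 20p(1 - p)^19, which is maximized at p = 1/20. A minimal Python sketch of this check (the code and the grid of candidate probabilities are illustrative, not from Hofstadter):

  n = 20

  def prob_exactly_one_asks(p, n=n):
      # Chance that exactly one of n independent players asks, each with probability p.
      return n * p * (1 - p) ** (n - 1)

  # Scan a grid of candidate probabilities; the maximum lands at p = 1/n = 0.05.
  best_p = max((i / 1000 for i in range(1, 1000)), key=prob_exactly_one_asks)
  print(best_p)                        # 0.05
  print(prob_exactly_one_asks(best_p)) # about 0.377, the best achievable chance of a payout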

Gary Drescher (see references) developed the concept further, introducing the term "acausal subjunctive morality" for an ethical system of behavior based on this mechanism.

Acausal trade goes one step beyond super-rationality. The agents do not need to be identical, nor do they need to have the same utility function for themselves. Moreover, the agents do not need to be told in advance what the other agents are like, or even if they exist. In acausal trade, an agent may have to surmise the potential existence of the other agent, and calculate probability estimates about what the other agent would want it to do.

Description

We have two (or more) agents, possibly separated so that no interaction is possible. This may be simply because each is unaware of the other's location, or because each is prevented from communicating with the other or from doing anything that would have a practical effect on the other. One agent may even be in the other's future, and so be unable to affect it.

In order to more clearly illustrate a situation in which interaction is absolutely impossible, other less prosaic scenarios are sometimes described: For example, the agents may be outside each other's light cones, or they may be in separate worlds in a multiverse. The Everett many-worlds interpretation of quantum mechanics is sometimes used for this example, but it is not necessary: we can talk of different counterfactual "impossible possible worlds," or simply of the probability distributions that an agent has over different possibilities.

In acausal trade, each agent can do something the other wants, and values that thing less than the other does. This is the usual condition for trade. However, acausal trade can happen even if the two are completely separated and have no opportunity to affect each other causally. One example: An FAI whose utility function takes into account the well-being of all humans, including not only those in its present and its future but also those in its past. (Note that the seemingly exotic utility functions brought into the discussion, including this one, are actually not so outlandish: Many humans value the wellbeing of other humans, and are distressed at harm done to people in the past, even though they can do nothing to change this.)

In acausal trade, the agents cannot count on the usual enforcement mechanisms to ensure cooperation--there is no expectation of future interactions or an outside enforcer. The agents cooperate because each knows that the other can somehow predict its behavior very well, like Omega in Newcomb's problem. Each knows that if it defects (respectively: cooperates), the other will know this, and defect (respectively: cooperate), and so the best choice is to cooperate, since, as usual in trade, Cooperate/Cooperate is better for both sides than Defect/Defect.
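
The reasoning can be made concrete with a toy calculation (the payoff numbers below are illustrative assumptions, not from the text): if each agent can predict the other perfectly, then its own choice effectively selects between the Cooperate/Cooperate and Defect/Defect outcomes, and cooperation wins that comparison.

  # Illustrative one-shot Prisoner's Dilemma payoffs; the numbers are assumed.
  PAYOFF = {  # (my move, their move) -> my payoff
      ("C", "C"): 3, ("C", "D"): 0,
      ("D", "C"): 5, ("D", "D"): 1,
  }

  def payoff_against_perfect_predictor(my_move):
      # A perfect predictor of my decision ends up mirroring it,
      # so the off-diagonal outcomes are unreachable.
      their_move = my_move
      return PAYOFF[(my_move, their_move)]

  print(payoff_against_perfect_predictor("C"))  # 3
  print(payoff_against_perfect_predictor("D"))  # 1 -- cooperating is the better choice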

Acausal trade can also be described in terms of precommitment: Both agents precommit to cooperate, and each has reason to think (or at least gains a probabilistic belief) that the other is also precommitting.

Prediction mechanisms

Acausal trade can occur with a variety of mechanisms for allowing the agents to conclude that the other exists, and to predict each other's behavior.

1. For the belief that the trading partner exists, we may have the simple case in which the agents are told this as part of the setup for the game. But more interesting is the case in which the agent concludes that the other is likely to exist. There is no need for it to be certain of this: As with all beliefs, some subjective probability about the other's existence suffices.

A superintelligence might conclude that other superintelligences are likely to exist in other worlds, because superintelligence (powerful optimization ability) is an attractor (see Basic AI drives). Its algorithms will take into account that acausal trade is one of the tricks a good optimizer may well use to optimize its utility function, implying the likely existence of superintelligent acausal trading partners.

To take a more prosaic example, a person might conclude that other people are in similar situations, because there are many humans with similar mental architectures, all of whom want similar things for themselves and many of whom face the same challenges and resource constraints.

2. Different paths have been described by which the agents can predict each other's behavior:

  1. They might know each other's mental architectures (source code).
  2. In particular, they might know that they have identical or similar mental architectures, so that each one knows that its own mental processes approximately simulate the other's. See Gary Drescher's "acausal subjunctive cooperation."
  3. They might be able to simulate each other, or to predict the other's behavior analytically. The simulation may be approximate, and the predictions probabilistic, to avoid the tractability problems of simulating something of the same complexity as oneself. (Simulation in this sense does not require full knowledge of the other agent's source code or the ability to run that code in precise simulation: even we humans simulate each other's thoughts, imprecisely, to guess what the other would do. A toy sketch of this kind of mutual simulation follows the list.)
  4. More broadly, acausal trade may be possible for two agents that know nothing at all about each other, except that the other is an extremely powerful superintelligence. Seen mathematically, this is just an optimization problem seeking the best possible algorithm for an agent's utility function. An optimum may be reached with algorithms in which each intelligence sacrifices the opportunity to generate some utility for itself, and instead generates utility for the other, while the other symmetrically does the same. In this case, any other choice would be suboptimal, both sides would know that, and so neither would deviate from the optimum. (If, counterfactually, one agent were to find it best to deviate, the other would conclude that this was the case and so defect, which would reduce the first one's utility.)
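
Item 3 can be illustrated with a toy sketch: each agent predicts its partner by running an approximate, budget-limited simulation of the partner's decision procedure, and cooperates exactly when the partner is predicted to cooperate. The depth cutoff and default guess below are assumptions made for the example, not mechanisms described in the text.

  def decide(agent, partner, depth=3):
      # Predict the partner by (approximately) running its decision procedure,
      # with a shrinking budget to avoid an infinite regress of simulations.
      if depth == 0:
          return "C"  # simulation budget exhausted: fall back on a default guess
      predicted_partner_move = decide(partner, agent, depth - 1)
      # Cooperate exactly when the partner is predicted to cooperate.
      return "C" if predicted_partner_move == "C" else "D"

  print(decide("agent_1", "agent_2"))  # "C": mutual cooperation is the stable outcome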

Decision Theories

The reasoning behind acausal trade resembles that behind recently developed decision theories, such as Updateless decision theory and Timeless decision theory. Like UDT and TDT, acausal trade takes into account reflexive considerations about the algorithms of the agent and of other players, unlike better-known forms of Decision theory, such as Causal decision theory (as well as Evidential Decision Theory). In CDT, the agent's algorithm (implementation) is treated as uncaused by the rest of the universe, so that it only influences the universe through its decision and subsequent action. In contrast, in UDT and TDT, the agents' own algorithms are treated as causal nodes, influenced by other factors, such as the logical requirement of optimality in a utility-function maximizer which "causes" them to make certain choices. In these theories and in acausal trade, the agent cannot escape the fact that its decision to defect or cooperate constitutes strong Bayesian evidence as to what the other agent will do, and so it is better off cooperating in the acausal trade.

An example of acausal trade with simple resource requirements

At its most abstract, the agents in this model are simply optimization algorithms, and agents can run each other as subalgorithms if that is useful in improving optimization power. Let T be a utility function for which time is the most valuable resource, while for utility function S, space is most valuable.

In choosing the best algorithms for T and S, we must deal with subjective uncertainty as to the environments in which they will operate. For our toy example, there is some probability that the algorithm will be run in an environment where time is in abundance and some probability that it will be run in a space-rich universe.

(This uncertainty is sometimes expressed in terms of multiverse theory, but only requires ordinary subjective uncertainty as is typical in decision theory, i.e., that the algorithms should be able to optimize in either kind of environment, weighting each environment by its probability.)

If the algorithm for T is instantiated in a space-rich environment, it will only be able to gain a small amount of utility for itself, but S would be able to gain a lot of utility; and vice versa.

The question is what algorithm for T, and respectively for S, provides the most optimization power, the highest expected value given the probability distributions on the environment.

The algorithm for T can prove about S's algorithm that it will "do a favor" for the utility function T, i.e., that S's algorithm will run T's algorithm as a subalgorithm, if S's algorithm finds itself instantiated in a time-rich environment. Symmetric statements can be made reversing S and T.

Each algorithm also estimates that an agent optimizing the other utility function, and ready to trade acausally, is likely to exist in other possible universes.
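
A minimal numeric version of this example, with made-up payoffs and a 50/50 chance of each environment (the text gives no figures), shows why the trading algorithm scores higher for T, and symmetrically for S:

  P_TIME_RICH = 0.5    # assumed probability that an instantiation lands in a time-rich world
  BIG, SMALL = 10, 1   # assumed utility obtainable in a favorable / unfavorable environment

  # Both algorithms act selfishly: T's instantiations gain BIG in time-rich
  # worlds and SMALL otherwise, and S's instantiations do nothing for T.
  ev_selfish = P_TIME_RICH * BIG + (1 - P_TIME_RICH) * SMALL

  # Both adopt the trading algorithm: T forfeits the SMALL it could scrape
  # together in space-rich worlds (it works for S there instead), but S's
  # instantiations in time-rich worlds now run T's algorithm as a subalgorithm.
  ev_trade = P_TIME_RICH * BIG + P_TIME_RICH * BIG

  print(ev_selfish)  # 5.5
  print(ev_trade)    # 10.0 -- higher expected utility for T; the same holds for S by symmetry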

Acausal trade with complex resource requirements

In the example above, the resource requirements are very simple: time and space. It may be that in the limit of increasing intelligence, time and space are the only relevant resources. But for an agent with complex and arbitrary goals, like humans, any potential trading partner will not consider the possibility of acausal trade to be likely enough that the partner would do any acausal favors directly for the agent.

However, the agents can analyze the situation in terms of the distribution of agents across a multiverse, or, equivalently, simply in terms of probability distributions.

Agents and trading partners exist in many near-identical instances across the multiverse. If an agent P decides to randomly sample a tiny fraction of all possible types of acausal trading partner Q, and the almost-identical counterparts of P elsewhere in the multiverse do likewise, then in aggregate the near-copies of P will sample all of the major types of acausal trading partner Q, in rough proportion to their frequency, provided the models are good. All these near-identical instances of the agent P then produce what their sampled trading partners Q would want, so long as the potential partners Q run a symmetric algorithm that would produce what the agents want (had they sampled them in return). The computational effort is shared out across the ensemble of P, but this still brings the goods given and received into rough balance, as long as the samples are representative.

Thus, each agent P is producing stuff only for a tiny fraction of potential trading partners Q, and cannot expect to trade one-on-one with any given trading partner. But because there are a huge number of near-copies of that agent P, each trading partner Q has a good chance that one of the ensemble P will do what Q wants.
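
A small Monte Carlo sketch of this sampling argument (all numbers are illustrative assumptions): even though each copy of P examines only a couple of partner types, the ensemble as a whole covers the partner types roughly in proportion to their frequency.

  import random
  from collections import Counter

  random.seed(0)

  partner_types = ["Q1", "Q2", "Q3", "Q4"]   # hypothetical major types of trading partner
  frequencies   = [0.4, 0.3, 0.2, 0.1]       # assumed frequency of each type

  NUM_COPIES_OF_P  = 10_000   # near-identical instances of agent P across the multiverse
  SAMPLES_PER_COPY = 2        # each copy only considers a tiny sample of partner types

  coverage = Counter()
  for _ in range(NUM_COPIES_OF_P):
      for q in random.choices(partner_types, weights=frequencies, k=SAMPLES_PER_COPY):
          coverage[q] += 1

  total = sum(coverage.values())
  for q in partner_types:
      print(q, round(coverage[q] / total, 3))   # close to 0.4, 0.3, 0.2, 0.1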

Ordinary trade

Ordinary trade is usually analyzed causally: The two parties can talk, exchange information, threaten retribution, promise payment, and bind themselves legally. When the trade is carried out as promised, the parties can deliver the promised rewards; if one or more parties defects, the others can carry out their threats.

Nonetheless, from the perspective of Updateless decision theory and related acausal theories, even ordinary trade can be analyzed acausally. Using UDT, an agent's optimal choice is to analyze a timeless outside view of the full range of possibilities, taking into account all the threats and promises, rewards and punishments, and to choose the best one. And whether or not the agent actually uses UDT, we can analyze all trade from a timeless, acausal perspective: Optimal agents are simply choosing the best algorithm, which may involve trade.

Relevance to Friendly AI

Acausal trade may be a way to get the cooperation of a future AI. If we know that

  • the AI would want us to behave a certain way,
  • and we can prove that it will do good things for us, once it arises, if we do what it wants now,
  • and that it can prove the symmetrical statement -- i.e., that we do what it wants, if we've proven this behavior about it --

then we can trade with it, even though it does not yet exist.

This approach rests on being able to prove certain facts about human behavior; or to put it another way, for humans to be able to commit to behavior.

See also


References