Acausal trade

In acausal trade, two agents each benefit by predicting what the other wants and doing it, even though they might have no way of communicating or affecting each other, nor even any direct evidence that the other exists.

Background: Super-rationality and the one-shot Prisoner's Dilemma

The concept of acausal trade emerged out of the much-debated question of how to achieve cooperation in a one-shot Prisoner's Dilemma, where, by design, the two players are not allowed to communicate. On the one hand, a player in the one-shot Prisoner's Dilemma who considers only the causal consequences of a decision finds that defection always produces a better result. On the other hand, if the other player reasons the same way, the result is a Defect/Defect equilibrium, which is bad for both agents. If they can somehow converge on mutual cooperation, each will do better, on its own utility measure, than in the Defect/Defect equilibrium. The question is what decision theory allows this beneficial cooperative equilibrium.
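
To make the tension concrete, here is a minimal Python sketch. The payoff numbers are illustrative assumptions (any values with temptation > reward > punishment > sucker's payoff give the same structure):

    # Payoffs for a one-shot Prisoner's Dilemma: (row player, column player).
    PAYOFFS = {
        ("C", "C"): (3, 3),  # mutual cooperation
        ("C", "D"): (0, 5),  # I cooperate, the other defects
        ("D", "C"): (5, 0),  # I defect, the other cooperates
        ("D", "D"): (1, 1),  # mutual defection
    }

    # Causal reasoning: holding the other player's move fixed,
    # defecting always scores better for me.
    for other in ("C", "D"):
        assert PAYOFFS[("D", other)][0] > PAYOFFS[("C", other)][0]

    # Yet comparing the symmetric outcomes, Cooperate/Cooperate
    # beats Defect/Defect for both players.
    assert PAYOFFS[("C", "C")][0] > PAYOFFS[("D", "D")][0]
    print("Defection dominates causally, but C/C beats D/D.")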

Douglas Hofstadter (see references) coined the term "super-rationality" to express this state of convergence. He illustrated it with a game in which twenty players, who do not know each other's identities, each receive an offer: if exactly one player asks for the prize of a billion dollars, that player gets it, but if no one or more than one asks, no one gets anything. The players cannot communicate, but each can reason that the others are reasoning similarly. The "correct" decision--the decision that maximizes each player's expected utility, given that the players symmetrically make the same decision--is to randomize, asking for the prize with probability one in twenty. Hofstadter's insight was an important starting point for further investigation of acausal game theory.
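
A short calculation confirms the one-in-twenty figure. The sketch below scans symmetric strategies numerically (the grid resolution is an arbitrary choice):

    # If each of n players independently asks with probability p, the chance
    # that exactly one asks is n * p * (1 - p)**(n - 1), maximized at p = 1/n.
    n = 20

    def p_exactly_one(p, n=n):
        return n * p * (1 - p) ** (n - 1)

    best_p = max((k / 1000 for k in range(1, 1000)), key=p_exactly_one)
    print(best_p)               # 0.05, i.e. one chance in twenty
    print(p_exactly_one(0.05))  # ~0.377: the chance that anyone wins at all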

Gary Drescher (see references) developed the concept further, introducing an ethical system called "acausal subjunctive morality" based on a mechanism similar to super-rationality. Drescher's approach relies on the agents being identical, or at least similar, so that each agent can reasonably guess what the other will do from facts about its own behavior. If it defects, the other will defect, and likewise for cooperating, because their "source code" is the same.

Acausal trade goes one step beyond this. The agents need not be identical, nor need they share a utility function. Moreover, the agents do not need to be told in advance what the other agents are like, or even whether they exist. In acausal trade, an agent may have to surmise the potential existence of the other agent, and calculate probability estimates about what the other agent would want it to do in "exchange."

Description

We have two agents, possibly separated so that no interaction is possible.

This may be simply because neither is aware of the other's location; or each may be prevented, for some reason, from communicating with or affecting the other. As another example, one agent may be in the other's future, and so unable to affect it.

Other, less prosaic scenarios are sometimes described in order to illustrate situations in which interaction is absolutely impossible. For example, the agents may be outside each other's light cones, or they may be in separate worlds in a multiverse. The Everett many-worlds interpretation of quantum mechanics is sometimes used for this example. But a multiverse is not necessary: we can talk of different counterfactual "impossible possible worlds" as abstractions, just for the purpose of our calculations, without considering them to exist elsewhere in a multiverse; or we can simply talk about an agent's probability distribution across the different possibilities.

The usual condition for trade is that each agent can do something the other wants, and values that thing less than the other does. This is true in ordinary trade. However, acausal trade can happen even if the two are completely separated and have no opportunity to affect each other causally.

In acausal trade, the agents cannot count on the usual enforcement mechanisms to ensure cooperation--there is no expectation of future interactions and no outside enforcer. The agents cooperate because each knows that the other can somehow predict its behavior very well, like Omega in Newcomb's problem. Each knows that if it defects (respectively: cooperates), the other will know this (perhaps probabilistically), and defect (respectively: cooperate); and so the best choice is to cooperate, since, as usual in trade, Cooperate/Cooperate is better for both sides than Defect/Defect.

Acausal trade can also be described in terms of (pre)commitment: Both agents commit to cooperate, and each has reason to think that the other is also committing. "Commitment" can be described as source code that provably will always cooperate in acausal trade scenarios.
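
As a toy illustration of commitment-as-source-code, consider the following Python sketch. It assumes, unrealistically, that each agent can read the other's source, and a crude textual check stands in for a real proof of cooperation; the function names are hypothetical:

    import inspect

    def committed_cooperator(opponent) -> str:
        """Source code that provably always cooperates, whatever the opponent does."""
        return "cooperate"

    def verifying_agent(opponent) -> str:
        # Inspect the opponent's source. The substring test below is a toy
        # stand-in for actually proving that the code always cooperates.
        source = inspect.getsource(opponent)
        unconditional = 'return "cooperate"' in source and "if" not in source
        return "cooperate" if unconditional else "defect"

    print(verifying_agent(committed_cooperator))  # -> cooperate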

Prediction mechanisms

For acausal trade to occur, each agent must infer that there is some probability that the other exists, and that the other will trade with it.

For the belief that the trading partner exists, we may have the simple case in which the agents are told this exogenously, as part of the scenario. But more interesting is the case in which the agent concludes that the other is likely to exist.

A superintelligence might conclude that other superintelligences would tend to exist, because increased intelligence (powerful optimization ability) is an attractor for agents (see Basic AI drives). Given the existence of a superintelligence, acausal trade is one of the tricks it would tend to use.

To take a more prosaic example, a person might realize that humans tend to be alike: even without knowing about specific acausal trading partners, they know that there exist other people with similar situations, goals, desires, challenges, resource constraints, and mental architectures.

Once an agent realizes that other agents might exist, there are different ways of describing how it can predict another agent's behavior, and specifically that the other agent can do something the first agent wants and that it will do so in "exchange" for the first agent's behavior.

  1. They might know each other's mental architectures (source code).
  2. In particular, they might know that they have identical or similar mental architectures, so that each one knows that its own mental processes approximately simulate the other's. See Gary Drescher's "acausal subjunctive cooperation."
  3. They might be able to simulate each other, or to predict the other's behavior analytically. The simulation might be approximate, and the predictions probabilistic. (Simulation in this sense does not require full knowledge of the other agent's source code, nor the ability to run the other's source code in precise simulation: even we humans simulate each other's thoughts to guess what the other would do.)
  4. More broadly, acausal trade may be possible for two agents each of which knows nothing about the other's algorithm (source code, mental architecture). It is enough to know (probabilistically) that the other is a powerful optimizer, that it has a certain utility function, and that it gets different utility from different resources. Seen mathematically, this is just an optimization problem: what is the best possible algorithm for an agent's utility function? An optimum may be reached with algorithms in which each intelligence sacrifices the opportunity to generate some utility for itself, and instead generates utility for the other, while the other symmetrically does the same. To prove this by contradiction, assume, counterfactually, that one agent could achieve optimal utility by defecting. Then, symmetrically, the other could do the same, resulting in a Defect/Defect outcome that is worse than Cooperate/Cooperate and so suboptimal, contradicting the assumption. So an optimal agent would cooperate in the acausal trade.
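
The following sketch illustrates mechanisms 1-3 in miniature: an agent that "predicts" its partner by comparing source code, cooperating exactly when the partner runs the same algorithm. The textual equality test is a deliberately crude stand-in for approximate simulation, and the names are hypothetical:

    import inspect

    def clone_cooperator(opponent) -> str:
        # Cooperate iff the opponent runs (textually) the same algorithm:
        # then it will do whatever this code does, so cooperation is safe.
        my_source = inspect.getsource(clone_cooperator)
        their_source = inspect.getsource(opponent)
        return "cooperate" if my_source == their_source else "defect"

    def always_defect(opponent) -> str:
        return "defect"

    print(clone_cooperator(clone_cooperator))  # -> cooperate
    print(clone_cooperator(always_defect))     # -> defect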

Decision Theories

The reasoning behind acausal trade resembles that behind recently developed decision theories such as Updateless decision theory (UDT). Like UDT, but unlike better-known decision theories such as Causal decision theory (CDT) or Evidential decision theory (EDT), acausal trade takes into account reflexive considerations about the algorithms of the agent and of the other players.

In CDT and EDT, the agent's algorithm (implementation) is treated as uncaused by the rest of the universe, so that though the agent's *decision* and subsequent action can make a difference, its internal make-up cannot (except through that decision). In contrast, in UDT, the agents' own algorithms are treated as causal nodes, influenced by other factors, such as the logical requirement of optimality in a utility-function maximizer. In UDT, as in acausal trade, the agent cannot escape the fact that its decision to defect or cooperate constitutes strong Bayesian evidence as to what the other agent will do, and so it is better off cooperating.
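
The difference can be made concrete with a twin Prisoner's Dilemma, in which the other player is a copy running the same algorithm. The payoff numbers below are assumptions carried over from the earlier sketch:

    # Payoff to "me" in a twin Prisoner's Dilemma: (my move, twin's move).
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    # CDT-style evaluation: treat the twin's move as fixed and independent
    # of mine, then compare my moves. Defection looks better either way.
    for twin in ("C", "D"):
        print("twin plays", twin, {me: PAYOFF[(me, twin)] for me in ("C", "D")})

    # UDT-style evaluation: I am choosing the *algorithm*, and my twin runs
    # the same algorithm, so one policy choice fixes both moves at once.
    for policy in ("C", "D"):
        print("shared policy", policy, "->", PAYOFF[(policy, policy)])  # C beats D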

Objections

The most common objections to acausal trade resemble those to UDT: why shouldn't the agent choose to defect? Whatever the considerations for cooperation, can't it "at the last moment" choose to back out and get a better result by "cheating"? But this approach takes into account only the direct effect of the decision. In fact, the choice of decision provides strong evidence about the agent's design (its algorithm). If it defects, that is evidence that it has an algorithm of the sort that would defect, and a powerful second player could figure that out. Thus, the agent does best by pursuing a policy that the other agent can recognize as cooperative.

Another objection: can an agent care about (have a utility function that takes into account) entities with which it can never interact, and of whose existence it is not certain? Yet this is quite common even for humans today. We care about the suffering of people in faraway lands whom we have no opportunity to influence. In an even more clear-cut example of acausality, we are disturbed by the suffering of long-gone historical people, and wish that, counterfactually, the suffering had not happened, though we know that changing history is impossible. We even care about entities that we are not sure exist. For example, we might read a news report that a valuable archeological find was destroyed in a distant country, while according to other reports the story was made up, the crime never occurred, and the item allegedly involved never even existed. Yet we still care, even as we recognize the uncertainty in our beliefs and so in our caring.

An example of acausal trade with simple resource requirements

At the most abstract level, the agents are simply optimization algorithms. As a toy example, let T be a utility function for which time is the most valuable resource, while for utility function S, space is the most valuable; for the purposes of our toy model, assume that these are the only two possible resources.

We will now choose the best algorithms for optimizing T and S. To avoid anthropomorphizing and asking what the agents will decide to do, we simply ask which algorithm--which source code--would give the highest expected utility for a given utility function. Thus, the choice of source code is "timeless": We treat it as an optimization problem across all possible strings of source code.

We specify that there is some probability that the agent for T (and likewise for S) will be run in an environment where time is in abundance, and some probability that it will be run in a space-rich universe.

(This uncertainty is sometimes expressed in terms of multiverses, but we need only a typical decision framework in which probability is multiplied by utility to calculate expected value.)

If the algorithm for T is instantiated in a space-rich environment, it will only be able to gain a small amount of utility for itself, but S would be able to gain a lot of utility; and vice versa.

The question is what algorithm for T, and respectively for S, provides the most optimization power, the highest expected value for the agent itself.

If it turns out that the environment is space-rich, the agent for T may run the agent (the algorithm) for S, increasing the utility for S, and symmetrically the reverse. This will happen if each of these can prove, perhaps probabilistically, that the optimum occurs when the other agent has the "trading" feature, and so will in fact have that feature.
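
A small expected-value calculation makes the toy model concrete. The probabilities and utility amounts below are assumptions chosen only to exhibit the structure:

    import itertools

    P_TIME_RICH = 0.5        # chance each instantiation lands in a time-rich world
    BIG, SMALL = 10.0, 1.0   # utility harvested in a favored vs. unfavored world

    def expected_T_utility(T_trades: bool, S_trades: bool) -> float:
        """Expected utility for T, averaging over both agents' environments."""
        u = 0.0
        # T's own instantiation: in a time-rich world it harvests for itself;
        # in a space-rich world a trading T harvests for S instead (gaining 0).
        u += P_TIME_RICH * BIG
        u += (1 - P_TIME_RICH) * (0.0 if T_trades else SMALL)
        # S's instantiation: a trading S in a time-rich world harvests for T.
        u += P_TIME_RICH * (BIG if S_trades else 0.0)
        return u

    for T_trades, S_trades in itertools.product([False, True], repeat=2):
        print(T_trades, S_trades, expected_T_utility(T_trades, S_trades))
    # Trade/Trade yields 10.0 for T (and, by symmetry, for S),
    # versus 5.5 for Selfish/Selfish.

As in the Prisoner's Dilemma, unilateral refusal to trade scores slightly higher here (10.5), which is why each agent must be able to prove that the other's optimal algorithm includes the trading feature.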

Acausal trade with complex resource requirements

In the toy example above, resource requirements are very simple: Time and space. In general, given that agents can have complex and arbitrary goals requiring a complex mix of resources, an agent would not be able to conclude that a specific trading partner exists and will acausally cooperate.

However, an agent can analyze the probability distribution over the existence of other agents, and weight its actions accordingly. It will do acausal "favors" for one or more trading partners, weighting its effort according to its subjective probability that the partner exists and will acausally reciprocate. (Alternatively, this can be described as variants of an agent across a multiverse doing different "favors" for different acausal trading partners who are likewise distributed across the multiverse.) The expected utility given and received will come into a good enough balance to benefit the traders, so long as the probabilistic sampling reflects an accurate distribution--as will happen in the limiting case of increasing superintelligence.
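
As a rough sketch of such weighting, an agent might compare the expected value of each candidate favor against its cost. The partners and numbers below are arbitrary placeholders:

    # Each hypothetical partner: subjective probability that it exists,
    # probability that it reciprocates if it exists, the cost to us of
    # doing its favor, and the value we receive if it reciprocates.
    partners = [
        {"p_exists": 0.20, "p_reciprocates": 0.9, "cost": 1.0, "value": 10.0},
        {"p_exists": 0.05, "p_reciprocates": 0.8, "cost": 2.0, "value": 12.0},
        {"p_exists": 0.50, "p_reciprocates": 0.5, "cost": 3.0, "value": 4.0},
    ]

    for p in partners:
        gain = p["p_exists"] * p["p_reciprocates"] * p["value"] - p["cost"]
        print("do the favor" if gain > 0 else "skip", f"(net {gain:+.2f})")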

Ordinary trade

Ordinary trade is usually analyzed causally: The two parties can have causal influence on each other when they share information, threaten retribution, promise payment, and bind themselves legally. Then, when the trade is carried out as promised, or else one or more parties defect, they can give rewards or carry out threats.

Even ordinary scenarios can be analyzed acausally, using a perspective similar to that of Updateless decision theory. We ask: which algorithm should an agent have, to get the best expected value summed across all possible environments, weighted by their probability? The possible environments include those in which threats and promises have been made.

Relevance to Friendly AI

Acausal trade may be a way to cooperate with a future AI--before it exists. If we know that

  • the AI would want us to behave a certain way,
  • and we can prove (probabilistically) that it will do good things for us, once it arises, if we do what it wants now,
  • and that it is likely to be able to prove the symmetrical statement--i.e., that we do what it wants, if we've proven this behavior about it,

then we can trade with it acausally, even though it does not yet exist.

This approach requires that humans be able to reliably commit to behavior, or at least that there exist certain provable facts about human behavior.


References

  • Douglas Hofstadter (1985). Metamagical Themas: Questing for the Essence of Mind and Pattern. Basic Books. (Introduces "super-rationality.")
  • Gary Drescher (2006). Good and Real: Demystifying Paradoxes from Physics to Ethics. MIT Press.