Acausal trade

In acausal trade, agents cooperate by each predicting what the other wants and doing it, even though they might have no way of communicating or affecting each other, nor even any direct evidence that the other exists.

Background: Super-rationality and the one-shot Prisoner's Dilemma

The concept of acausal trade emerged out of the much-debated question of how to achieve cooperation on a Prisoner's Dilemma, where, by design, the two players are not allowed to communicate. On the one hand, a player in the one-shot Prisoner's Dilemma who is considering the causal consequences of a decision finds that defection always produces a better result. On the other hand, if the other player symmetrically reasons this way, the result is an equilibrium of Defect/Defect, which is bad for both agents. If they can somehow converge on mutual cooperation, they will each do better, on their own individual utility measure, than the Defect/Defect equilibrium. The question is what decision theory allows this beneficial cooperation equilibrium.

Douglas Hofstadter (see references) coined the term "super-rationality" to express this state of convergence. He illustrated it with a game in which twenty players, who do not know each other's identities, each receive an offer: if exactly one player asks for the prize of a billion dollars, that player gets it; if none or several ask, no one gets anything. Players cannot communicate, but each can reason that the others are reasoning similarly. The "correct" decision--the one that maximizes expected utility for each player, given that all players symmetrically make the same decision--is to randomize, asking for the prize with probability one in twenty. Hofstadter's insight was an important starting point for further investigation of acausal game theory.
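
To see why one in twenty is the symmetric optimum: if each of the twenty players independently asks with probability q, the chance that exactly one asks is 20q(1-q)^19, which is maximized at q = 1/20. A minimal Python check of this (the code is illustrative, not part of the original discussion):

    # Chance that exactly one of n players asks, when each asks
    # independently with probability q.
    n = 20
    def win_prob(q):
        return n * q * (1 - q) ** (n - 1)

    # Scan q over a fine grid; the maximum lands at q = 1/20 = 0.05.
    qs = [i / 1000.0 for i in range(1, 1000)]
    best_q = max(qs, key=win_prob)
    print(best_q, win_prob(best_q))  # 0.05 0.377...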

Gary Drescher (see references) developed the concept further, introducing an ethical system called "acausal subjunctive morality" based on a mechanism similar to super-rationality. Drescher's approach relies on the agents being identical, or at least similar, so that each agent can reasonably guess what the other will do based on facts about its own behavior. If one defects, the other will defect, and likewise for cooperating, because their "source code" is the same.

Acausal trade goes one step beyond this. The agents do not need to be identical, nor do they need to each have the same utility function for themselves. Moreover, the agents do not need to be told in advance what the other agents are like, or even if they exist. In acausal trade, an agent may have to surmise the potential existence of the other agent, and calculate probability estimates about what the other agent would want it to do in "exchange."

Description

We have two (or more) agents, possibly separated so that no interaction is possible. This can be simply because each is not aware of the location of the other; or else each may be prevented for some reason from communicating with or affecting the other. As another example, one agent may be in the other's future, and so unable to affect it.

Other, less prosaic scenarios are sometimes described in order to illustrate situations in which interaction is absolutely impossible. For example, the agents may be outside each other's light cones, or they may be in separate worlds in a multiverse. (The Everett many-worlds interpretation of quantum mechanics is sometimes used for this example.) But a multiverse is not necessary: We can talk of different counterfactual "impossible possible worlds" as abstractions, just for the purpose of our calculations, without considering them to exist elsewhere in a multiverse; or we can simply talk about an agent's probability distribution across the different possibilities.

The usual condition for trade is that each agent can do something the other wants, and values that thing less than the other does. This is true in ordinary trade. However, acausal trade can happen even if the two are completely separated and have no opportunity to affect each other causally.

In acausal trade, the agents cannot count on the usual enforcement mechanisms to ensure cooperation--there is no expectation of future interactions or an outside enforcer. The agents cooperate because each knows that the other can somehow predict its behavior very well, like Omega in Newcomb's problem. Each knows that if it defects (respectively: cooperates), the other will know this (perhaps probabilistically), and defect (respectively: cooperate); so the best choice is to cooperate, since, as usual in trade, Cooperate/Cooperate is better for both sides than Defect/Defect.

Acausal trade can also be described in terms of (pre)commitment: Both agents commit to cooperate, and each has reason to think that the other is also committing. "Commitment" can be described as source code that provably will always cooperate in acausal trade scenarios.

Inference/Prediction mechanisms

For acausal trade to occur, both agents must conclude that there is some probability that the other exists, and that the other will trade with it.

For the belief that the trading partner exists, we may have the simple case in which the agents are told this exogenously, as part of the scenario. But more interesting is the case in which the agent concludes that the other is likely to exist. There is no need for it to be certain of this: As with all beliefs, a certain subjective probability about the other's existence suffices.

A superintelligence might conclude that other superintelligences would tend to exist because superintelligence (powerful optimization ability) is an attractor for intelligences (see Basic AI drives). The algorithms will take into account that acausal trade is one of the tricks a good optimizer would tend to use to optimize its utility function.

To take a more prosaic example, a person might conclude that other people are in similar situations and thinking similarly because there are many humans with similar mental architectures, all of whom want similar things for themselves and many of whom face the same challenges and resource constraints.

Different paths have been described for the agents to predict each other's behavior:

  1. They might know each other's mental architectures (source code).
  2. In particular, they might know that they have identical or similar mental architectures, so that each one knows that its own mental processes approximately simulate the other's (see the sketch after this list). See Gary Drescher's "acausal subjunctive cooperation."
  3. They might be able to simulate each other, or to predict the other's behavior analytically. The simulation might be approximate, and the predictions probabilistic, to avoid the tractability problems of simulating something of the same complexity as oneself. (Simulation in this sense does not require full knowledge of the other agent's source code, nor the ability to run that code in precise simulation: even we humans simulate each other's thoughts to guess what the other would do.)
  4. More broadly, acausal trade may be possible for two agents that know nothing at all about each other, except that the other is an extremely powerful superintelligence. Seen mathematically, this is just an optimization problem: find the best possible algorithm for an agent's utility function. An optimum may be reached with algorithms in which each intelligence sacrifices the opportunity to generate some utility for itself, and instead generates utility for the other, while the other symmetrically does the same. Both sides would know that any other choice would be suboptimal, and so neither would deviate from the optimum. (If, counterfactually, one agent were to find it best to deviate, that would logically imply that the other would conclude that this was the case and so defect, reducing the first one's utility.)
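
As a minimal sketch of mechanism 2 (the names and structure below are illustrative assumptions, not anyone's published implementation), consider players who are handed each other's source code and cooperate exactly when the opponent's code is identical to their own. Two copies of such an agent provably reach Cooperate/Cooperate, while a different program is met with defection:

    import inspect

    def clique_agent(opponent_source):
        """Cooperate iff the opponent's source is character-for-character
        identical to our own; any other opponent gets 'D'."""
        my_source = inspect.getsource(clique_agent)
        return "C" if opponent_source == my_source else "D"

    # Two copies inspect each other and cooperate; neither can deviate
    # without ceasing to be a copy of the other.
    src = inspect.getsource(clique_agent)
    print(clique_agent(src))                          # 'C'
    print(clique_agent("def defect(s): return 'D'"))  # 'D'

Mechanism 3 amounts to relaxing this exact-match test to an approximate, probabilistic prediction of the opponent's output.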

Decision theories

The reasoning behind acausal trade resembles that behind recently developed decision theories, such as Updateless decision theory (UDT). Like UDT, acausal trade takes into account reflexive considerations about the algorithms of the agent and of the other players, unlike better-known decision theories such as Causal decision theory (CDT) or Evidential decision theory (EDT). In CDT and EDT, the agent's algorithm (implementation) is treated as uncaused by the rest of the universe, so that the agent influences the universe only through its decision and subsequent action. In contrast, in UDT, the agent's own algorithm is treated as a causal node, influenced by other factors, such as the logical requirement of optimality in a utility-function maximizer, which "causes" it to make certain choices. In such theories, as in acausal trade, the agent cannot escape the fact that its decision to defect or cooperate constitutes strong Bayesian evidence about what the other agent will do, and so it is better off cooperating.
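
The contrast can be made concrete on the one-shot Prisoner's Dilemma. The payoff numbers below are the conventional illustrative ones, and the two evaluation rules are deliberately simplified caricatures of CDT-style and UDT-style reasoning:

    # Row player's payoffs in the one-shot Prisoner's Dilemma.
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    # CDT-style: hold the opponent's move fixed; defection dominates
    # whichever move we pencil in for the opponent.
    def cdt_choice(assumed_opponent_move):
        return max("CD", key=lambda m: PAYOFF[(m, assumed_opponent_move)])

    # UDT-style: our choice is evidence about the output of any agent
    # running the same algorithm, so the opponent's move mirrors ours.
    def udt_choice():
        return max("CD", key=lambda m: PAYOFF[(m, m)])

    print(cdt_choice("C"), cdt_choice("D"))  # D D
    print(udt_choice())                      # C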

Objections

The most common objections to acausal trade resemble the objections to UDT: Why shouldn't the agent choose to defect? Whatever the considerations in favor of cooperation, can't it "at the last moment" choose to "cheat" and get a better result? This approach, however, takes into account only the direct effect of the decision, treating the agent's decision as severed from the realities that made and influenced the agent. In fact, the decision results from the agent's design (its algorithm) together with its observations. If it cheats, that provides evidence that it has an algorithm of the sort that would cheat, and a powerful second player could figure that out. Thus, it is best for the agent to pursue a policy that the other agent can identify as a cooperative one.

Other objections are about whether the agent can care about (have a utility function that takes into account) entities with which it can never interact, and which it does not even know exist. Yet this is quite common even today. Humans care about what happens to a fictional character, even when the outcome is predetermined because it has already been written down in the novel. Moreover, humans often care about the suffering of people in faraway lands whom they have no opportunity to influence. In an even more clear-cut example of acausality, we care about the suffering of long-gone historical people, whose fate it is clearly impossible to change. We even care about entities that we are not sure exist: When we hear unreliable reports that a valuable archeological find was destroyed in a distant country, and then see other reports that the crime did not actually occur and the item allegedly involved never even existed, we still care, even as we recognize the uncertainty in our beliefs and so in our caring.

An example of acausal trade with simple resource requirements

At its most abstract, the agents are simply optimization algorithms. Let T be a utility function for which time is the most valuable resource, and S a utility function for which space is the most valuable; for the purpose of our toy model, assume that these are the only two possible resources.

In choosing the best algorithms for optimizing T and S, we must deal with subjective uncertainty as to the environments in which they will operate. In our example, there is some probability that the algorithm will be run in an environment where time is in abundance, and some probability that it will be run in a space-rich universe.

(This uncertainty is sometimes expressed in terms of multiverses, but only requires ordinary subjective uncertainty, as is typical in decision theory. In other words, the algorithms should be able to optimize in either kind of environment. The goal is to maximize expected utility, weighting each environment by its probability.)

If the algorithm for T is instantiated in a space-rich environment, it will only be able to gain a small amount of utility for itself, but S would be able to gain a lot of utility; and vice versa.

The question is what algorithm for T, and respectively for S, provides the most optimization power, the highest expected value for the agent itself.

The algorithm for T can prove about S's algorithm that it will "do a favor" for the utility function T, i.e., that S's algorithm will run T's algorithm as a subalgorithm, if S's algorithm finds itself instantiated in a time-rich environment. Symmetric statements can be made reversing S and T.

The algorithms also estimate that an agent optimizing the other utility function, and ready to trade acausally, has a non-trivial probability of existing, given that the only resources available, per our model, are space and time.
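
To put rough numbers on this (the payoffs below are assumptions chosen only for illustration): suppose each algorithm assigns probability 1/2 to the world being time-rich, an agent generates 10 utils in the world matching its favored resource but only 1 util in the mismatched world, and a cooperating partner in the matching world generates a further 8 utils on its behalf. By symmetry, one calculation covers both T and S:

    P_TIME_RICH = 0.5   # subjective probability the world is time-rich
    OWN_GAIN    = 10    # utils an agent generates in its favored world
    STRANDED    = 1     # utils it can generate in the mismatched world alone
    FAVOR       = 8     # utils a cooperating partner generates on its behalf

    def expected_utility_T(policy):
        """Expected utility for T; acausally, S is taken to arrive at the
        same policy, so a single argument covers both agents."""
        if policy == "cooperate":
            # Time-rich world: T works for itself and S runs T's algorithm
            # as a subalgorithm. Space-rich world: T spends its resources
            # doing S's work and gains nothing for itself.
            return P_TIME_RICH * (OWN_GAIN + FAVOR)
        # Defect: each agent works only for itself, wherever it lands.
        return P_TIME_RICH * OWN_GAIN + (1 - P_TIME_RICH) * STRANDED

    print(expected_utility_T("cooperate"))  # 9.0
    print(expected_utility_T("defect"))     # 5.5

With these numbers the mutually cooperating pair of algorithms comes out ahead (9.0 versus 5.5 expected utils), which is the sense in which neither algorithm deviates from the cooperative optimum.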

Acausal trade with complex resource requirements

In the toy example above, resource requirements are very simple: Time and space. In general, given that agents can have complex and arbitrary goals, as humans do, an agent would not be able to conclude that a specific trading partner exists and will acausally cooperate.

However, an agent can analyze the probability distribution over the existence of other agents, and weight its actions accordingly. It will make more effort to do a "favor" for an acausal trading partner that it judges more likely to exist and to reciprocate acausally. (Alternatively, this can be described as variants of an agent across a multiverse doing different "favors" for different acausal trading partners.) The goods given and received will come into a good enough balance to benefit the traders, so long as the probabilistic sampling reflects an accurate distribution--as will happen in the limiting case of increasing superintelligence.
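
A hedged sketch of such weighting (the partner names and all numbers below are invented for illustration): the agent does a "favor" whenever its expected acausal return, i.e. the probability the partner exists, times the probability it reciprocates, times the value of the reciprocation, exceeds the favor's cost:

    # Hypothetical trading partners; every name and number is an assumption.
    partners = {
        "time_optimizer":  {"p_exists": 0.6, "p_reciprocates": 0.5, "value": 3.0},
        "space_optimizer": {"p_exists": 0.2, "p_reciprocates": 0.9, "value": 6.0},
    }
    COST_OF_FAVOR = 1.0  # utility forgone by doing a partner's work

    def expected_return(partner):
        """Expected utility gained via the partner's reciprocation."""
        return partner["p_exists"] * partner["p_reciprocates"] * partner["value"]

    for name, p in partners.items():
        act = "do the favor" if expected_return(p) > COST_OF_FAVOR else "skip"
        print(f"{name}: expected return {expected_return(p):.2f} -> {act}")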

Ordinary trade

Ordinary trade is usually analyzed causally: the two parties can talk, exchange information, threaten retribution, promise payment, and bind themselves legally. Once the trade is carried out as promised, or one or more parties defects, the others can deliver rewards or carry out threats.

Nonetheless, from the perspective of Updateless decision theory and related acausal theories, even ordinary trade can be analyzed acausally. Using UDT, an agent's optimal choice is to analyze, from a timeless outside view, the full range of possible algorithms, taking into account the full causal network, including all threats and promises; it then chooses the algorithm that gives the best expected utility. Any threats and promises, punishments and rewards, are simply part of the utility calculation.

Relevance to Friendly AI

Acausal trade may be a way to get the cooperation of a future AI. If we know that

  • the AI would want us to behave a certain way,
  • and we can prove (perhaps probabilistically) that it will do good things for us, once it arises, if we do what it wants now,
  • and that it can prove the symmetrical statement--i.e., that we do what it wants, if we've proven this behavior about it--

then we can trade with it acausally, even though it does not yet exist.

This approach rests on being able to prove certain facts about human behavior, or on humans being able to commit reliably to a given behavior.

See also

  • Updateless decision theory
  • Newcomb's problem
  • Prisoner's dilemma
  • Basic AI drives

References

  • Douglas Hofstadter, "Dilemmas for Superrational Thinkers, Leading Up to a Luring Lottery", Scientific American, June 1983; reprinted in Metamagical Themas: Questing for the Essence of Mind and Pattern, Basic Books, 1985.
  • Gary Drescher, Good and Real: Demystifying Paradoxes from Physics to Ethics, MIT Press, 2006.