Acausal trade


In acausal trade, agents cooperate by each predicting what the other wants and doing it, even though they might have absolutely no way of ever communicating with or affecting each other.

== Background: Super-rationality ==

The concept of acausal trade emerges out of the much-debated question of how to achieve cooperation on a one-shot Prisoner's Dilemma. On the one hand, an agent considering the causal consequences of a decision to cooperate or to defect finds that defection always produces a better result. On the other hand, if the other agent symmetrically reasons this way, the result is an equilibrium of Defect/Defect, which is bad for both agents. If they can somehow converge on mutual cooperation, they will each do better, on their own individual utility measure, than in the Defect/Defect equilibrium. The question is what decision theory allows agents to reach this beneficial equilibrium rather than falling into mutual defection.

Douglas Hofstadter (see references) coined the term "super-rationality" to express this state of convergence. He illustrated it with a game in which twenty players, who do not know each other's identities, each receive an offer: if exactly one player asks for the prize of a billion dollars, that player gets it, but if no one or more than one asks, no one gets it. The "correct" decision -- the decision which maximizes expected utility for each player if all players symmetrically make the same decision -- is to randomize, asking for the prize with probability one in twenty. Players cannot communicate, but each can reason that the others are reasoning similarly. Hofstadter's insight was an important starting point for further investigation of acausal interaction.
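To see why one in twenty is the symmetric optimum: if each of n players asks with probability p, the chance that exactly one asks is n*p*(1-p)^(n-1), which is maximized at p = 1/n. A minimal Python check (not part of the original article; the function name is our own):

<pre>
# Chance that exactly one of n players asks, if each independently
# asks with probability p: n * p * (1-p)^(n-1).
def p_exactly_one(n, p):
    return n * p * (1 - p) ** (n - 1)

# Scan p over a fine grid for n = 20; the maximum lands at p = 0.05 = 1/20.
best = max((k / 1000 for k in range(1, 1000)), key=lambda p: p_exactly_one(20, p))
print(best)  # 0.05
</pre>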

Gary Drescher (see references) developed the concept further, introducing the term "acausal subjunctive morality" for an ethical system of behavior based on this mechanism.

Eliezer Yudkowsky has discussed this one-shot Prisoner's Dilemma at [http://lesswrong.com/lw/tn/the_true_prisoners_dilemma/ The True Prisoner's Dilemma].

Acausal trade goes one step beyond this form of cooperation. The agents do not need to be identical, or even have the same utility function for themselves. Moreover, the agents do not need to be told in advance what the other agents are like, or even if they exist. In acausal trade, an agent may have to surmise the potential existence of the other agent, and calculate probability estimates about what the other agent would want it to do.

== Description ==

We have agents P and Q, possibly separated so that no interaction is possible. This can be simply because each is unaware of the other's location, or is prevented from communicating with the other or from doing anything that would have a practical effect on it. One agent may be in the other's future, and so unable to affect it. Other, less prosaic scenarios are sometimes described in order to present a situation in which interaction is definitively impossible: for example, the agents may be in separate Everett branches, or outside each other's light cones.

In acausal trade, each agent can do something the other wants, and values that thing less than the other does. This is the usual condition for trade. However, acausal trade can happen even if the two are completely separated and have no opportunity to affect each other causally. One example: an FAI whose utility function sums up the well-being of all humans, including those in its past with whom it cannot be in causal contact.

In acausal trade, the agents cannot count on the usual enforcement mechanisms to ensure cooperation -- there is no expectation of future interactions or an outside enforcer.

The agents cooperate because each knows that the other can somehow predict its behavior very well, like Omega in [[Newcomb's problem]].

Each knows that if it defects (respectively: cooperates), the other will know this, and defect (respectively: cooperate), and so the best choice is to cooperate, since, as usual in trade, Cooperate/Cooperate is better for both sides than Defect/Defect.

This can also be described as a scenario in which both agents precommit to cooperate, and in which each agent can prove this about the other (or at least arrive at a probabilistic belief that it holds).
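A minimal sketch of this reasoning in Python (the payoff numbers are invented for illustration): because each agent knows its choice will be predicted and mirrored, only the two symmetric outcomes are live options, and the agent simply picks the better one.

<pre>
# Row player's payoffs in a one-shot Prisoner's Dilemma (invented numbers).
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def choose():
    # A reliable predictor mirrors whatever I do, so the only reachable
    # outcomes are Cooperate/Cooperate and Defect/Defect.
    return 'C' if PAYOFF[('C', 'C')] > PAYOFF[('D', 'D')] else 'D'

print(choose())  # 'C'
</pre>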

== Prediction mechanisms ==

Acausal trade can occur with a variety of mechanisms for allowing the agents to predict each other.

# They might know each other's mental architectures (source code).
# In particular, they might know that they have identical or similar mental architectures (source code), so that each one knows that its own mental processes approximately simulate the other's. See Gary Drescher's "acausal subjunctive cooperation."
# They might be able to simulate each other, or to predict the other's behavior analytically. The simulation may be approximate, and the predictions probabilistic, to avoid tractability problems, including the problem of simulating something of the same complexity as oneself. (Simulation in this sense does not require full knowledge of the other agent's source code or the ability to run that code in precise simulation: humans, for example, simulate each other's thoughts to guess what the other would do. See the sketch after this list.)
# More broadly, acausal trade may be possible for two agents that know nothing at all about each other, except that the other is an extremely powerful superintelligence. Seen mathematically, this is just an optimization problem: what is the best possible algorithm for a given utility function? An optimum may be reached with algorithms in which each intelligence sacrifices the opportunity to generate some utility for itself and instead generates utility for the other's utility function, while the other symmetrically does the same.
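As a toy illustration of mechanisms 2 and 3, here is a hedged Python sketch (our construction, not from the article) of an agent that cooperates exactly when a depth-limited simulation of its opponent predicts cooperation; the depth limit stands in for approximate, resource-bounded simulation:

<pre>
def simulator(opponent, depth):
    # Cooperate iff an approximate, depth-limited simulation of the
    # opponent (playing against this same algorithm) predicts cooperation.
    if depth == 0:
        return 'C'  # assumption of this sketch: ground the regress optimistically
    return 'C' if opponent(simulator, depth - 1) == 'C' else 'D'

def defector(opponent, depth):
    return 'D'  # ignores any prediction and always defects

print(simulator(simulator, 4))  # 'C': two mutual simulators cooperate
print(simulator(defector, 4))   # 'D': predicted defection is met with defection
</pre>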

== Decision Theories ==

Acausal trade does not rely on the notions of causality, temporal ordering, and evidence that typically serve as inputs to the common forms of decision theory, including Causal decision theory and Evidential decision theory. Thus, decisions about acausal trade must rely on other, more recently developed decision theories, such as [[Timeless decision theory]] and [[Updateless decision theory]].

== An example of acausal trade with simple resource requirements ==

At its most abstract, the agents in this model are simply optimization algorithms, and agents can run each other as subalgorithms if that is useful in improving optimization power. Let T be an algorithm for which time is the most valuable resource in optimizing its utility function, while for S, space is most valuable.

T and S must both deal with subjective uncertainty as to whether they will be instantiated in an environment where time is in abundance (an environment which will not end for a long time), or in a space-rich universe (an extremely large environment). (This uncertainty is sometimes expressed in terms of multiverse theory, but it only requires that we specify subjective uncertainty as is typical in decision theory, i.e., that the algorithms should be able to optimize in either kind of environment, weighting each environment by its probability.)

The question is which algorithm for T, and likewise for S, provides the most optimization power: the highest expected value given the probability distribution over environments.

T can prove about S that it will "do a favor" for T, i.e., that S will run T as a subalgorithm, if S finds itself instantiated in a time-rich environment. S can prove that T will "reciprocate" -- i.e., T will run S if it finds itself in a space-rich environment, in acausal trade for S's proven (possibly probabilistically proven) commitment.
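A toy expected-utility calculation (all numbers below are invented for illustration, not taken from the article) suggests why such a commitment can be worthwhile ex ante; the symmetric computation holds for S.

<pre>
# Invented parameters: each instance lands in a time-rich world with
# probability p, else in a space-rich world.
p = 0.5
V_NATIVE = 10.0    # value to T's utility function of a T-instance in a time-rich world
HOST_COST = 1.0    # value T forgoes by running S as a subalgorithm in a space-rich world
HOSTED_GAIN = 6.0  # extra value to T when S, honoring the trade, runs T in a time-rich world

# Without the trade, T's utility function only scores in worlds suited to T.
eu_no_trade = p * V_NATIVE

# With the trade, T pays the hosting cost in space-rich worlds, while S's
# symmetric commitment earns T extra value in time-rich worlds.
eu_trade = p * (V_NATIVE + HOSTED_GAIN) + (1 - p) * (-HOST_COST)

print(eu_no_trade, eu_trade)  # 5.0 vs 7.5: committing to the trade wins ex ante
</pre>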

T and S also estimate that the other is likely to exist. This is because super-powerful intelligences (optimizers) are an attractor in any universe that has a general intelligence at all, since any general intelligence will want to self-improve in order to better optimize towards its goals. Given that superintelligences exist in other possible universes, T and S will realize that acausal trade is one of the tricks a good optimizer may well use to optimize its utility function, thus implying the likely existence of the superintelligent acausal trading partner.

== Acausal trade with complex resource requirements ==

In the example above, the resource requirements are very simple: time and space. It may be that in the limit of increasing intelligence, time and space are the only relevant resources. But for an agent with complex and arbitrary goals, as humans have, any single potential trading partner will probably not consider acausal trade with that specific agent likely enough to do any acausal favors for it directly.

However, the agents can analyze the distribution of potential trading partners across a multiverse or, equivalently, reason simply in terms of subjective probability distributions.

Agents and trading partners exist in many near-identical instances across the multiverse. If an agent P decides to randomly sample a tiny fraction of all possible types of acausal trading partner Q, and the almost-identical counterparts of P elsewhere in the multiverse do likewise, then in aggregate the near-copies of P will sample all the major types of acausal trading partner Q, in rough proportion to their frequency (if the models are good). Each of these near-identical instances of P then produces what its sampled partners Q would want, provided those partners run a symmetric algorithm that would produce what P wants had they sampled P in return. The computational effort is shared out across the ensemble of near-copies of P, yet the goods given and received still come into rough balance, as long as the samples are representative.

Thus, each agent P is producing stuff only for a tiny fraction of potential trading partners Q, and cannot expect to trade one-on-one with any given trading partner. But because there are a huge number of near-copies of that agent P, each trading partner Q has a good chance that one of the ensemble P will do what Q wants.
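A small Monte Carlo sketch of the sampling argument (the partner types and frequencies are invented): each near-copy of P serves one randomly sampled partner type, and in aggregate the ensemble covers every type roughly in proportion to its frequency.

<pre>
import random

random.seed(0)
partner_freq = {'Q1': 0.5, 'Q2': 0.3, 'Q3': 0.2}  # hypothetical partner-type frequencies
n_copies = 100_000                                 # near-copies of P in the ensemble

served = {q: 0 for q in partner_freq}
for _ in range(n_copies):
    # Each copy of P samples one partner type and produces what it wants.
    q = random.choices(list(partner_freq), weights=list(partner_freq.values()))[0]
    served[q] += 1

for q in partner_freq:
    print(q, served[q] / n_copies)  # close to 0.5, 0.3, 0.2: goods roughly balance
</pre>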

== Ordinary trade ==

Ordinary trade is usually analyzed causally: The two parties can talk, exchange information, threaten retribution, promise payment, and bind themselves legally. When the trade is carried out as promised, or when one or more parties defects, the others can give rewards or carry out threats.

Nonetheless, from the perspective of [[Updateless decision theory]] and related acausal theories, even ordinary trade can be analyzed acausally. Using UDT, an agent's optimal choice is to analyze a timeless outside view of the full range of possibilities, including all the threats and promises, rewards and punishments, and choose the best one. And whether or not the agent actually uses UDT, we can analyze all trade using a timeless, acausal perspective.
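A hedged sketch of this outside-view evaluation (the payoffs and the 90% predictor accuracy are invented): rather than conditioning on the partner's move and then optimizing, the agent scores each policy across the whole range of possibilities, where the partner's behavior itself depends on the policy chosen.

<pre>
# Invented payoffs for the row player, facing an invented 90%-accurate predictor.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def expected_value(policy):
    # The partner predicts the agent's policy, so the probability of
    # meeting cooperation depends on the policy under evaluation.
    p_partner_cooperates = 0.9 if policy == 'C' else 0.1
    return (p_partner_cooperates * PAYOFF[(policy, 'C')]
            + (1 - p_partner_cooperates) * PAYOFF[(policy, 'D')])

best = max(['C', 'D'], key=expected_value)
print(best, expected_value('C'), expected_value('D'))  # C 2.7 1.4
</pre>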

== Relevance to Friendly AI ==

Acausal trade may be a way to get the cooperation of a future AI. If we know that

* the AI would want us to behave a certain way,
* and we can prove that it will do good things for us, once it arises, if we do what it wants now,
* and that it can prove the symmetrical statement -- i.e., that we do what it wants, if we've proven this behavior about it --

then we can trade with it, even though it does not yet exist.

This approach rests on being able to prove certain facts about human behavior, or, to put it another way, on humans being able to commit to future behavior.

== See also ==
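* [http://www.raikoth.net/Stuff/story1.html A story] that shows acausal trade in action.
* "[http://www.nickbostrom.com/papers/porosity.pdf Hail Mary, Value Porosity, and Utility Diversification]," Nick Bostrom, the first paper from academia to rely on the concept of acausal trade.
* [http://intelligence.org/files/TowardIdealizedDecisionTheory.pdf Towards an idealized decision theory], by Nate Soares and Benja Fallenstein, discusses acausal interaction scenarios that shed light on new directions in decision theory.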

== References ==
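* Douglas Hofstadter, "Dilemmas for Superrational Thinkers, Leading Up to a Luring Lottery," ''Scientific American'', June 1983; reprinted in ''Metamagical Themas: Questing for the Essence of Mind and Pattern'', Basic Books, 1985.
* Gary Drescher, ''Good and Real: Demystifying Paradoxes from Physics to Ethics'', MIT Press, 2006.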