In acausal trade, two agents each benefit by predicting what the other wants and doing it, even though they might have no way of communicating or affecting each other, nor even any direct evidence that the other exists.
- 1 Background: Super-rationality and the one-shot Prisoner's Dilemma
- 2 Description
- 3 Prediction mechanisms
- 4 Decision Theories
- 5 Limitations and Objections
- 6 An example of acausal trade with simple resource requirements
- 7 Acausal trade with complex resource requirements
- 8 Ordinary trade
- 9 Counterpoints
- 10 See also
- 11 References
Background: Super-rationality and the one-shot Prisoner's Dilemma
The concept of acausal trade emerged out of the much-debated question of how to achieve cooperation on a Prisoner's Dilemma, where, by design, the two players are not allowed to communicate. On the one hand, a player in the one-shot Prisoner's Dilemma who is considering the causal consequences of a decision finds that defection always produces a better result. On the other hand, if the other player symmetrically reasons this way, the result is an equilibrium of Defect/Defect, which is bad for both agents. If they can somehow converge on mutual cooperation, they will each do better, on their own individual utility measure, than the Defect/Defect equilibrium. The question is what decision theory allows this beneficial cooperation equilibrium.
Douglas Hofstadter (see references) coined the term "super-rationality" to express this state of convergence. He illustrated it with a game in which twenty players, who do not know each other's identities, each receive an offer: if exactly one player asks for the prize of a billion dollars, that player gets it, but if no one or more than one asks, no one gets it. Players cannot communicate, but each can reason that the others are reasoning similarly. The "correct" decision--the one that maximizes expected utility for each player, given that all players symmetrically make the same decision--is to randomize, asking for the prize with probability one in twenty. Hofstadter's insight was an important starting point for further investigation of acausal game theory.
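The one-in-twenty strategy can be checked numerically: if each of n players asks independently with probability p, the chance that exactly one asks is n·p·(1−p)^(n−1), which peaks at p = 1/n. A minimal sketch (the grid search and printout are purely illustrative):

```python
# Hofstadter's 20-player lottery: each player independently asks for the
# prize with probability p; the prize is awarded only if exactly one asks.

def p_exactly_one(n: int, p: float) -> float:
    """Probability that exactly one of n players asks for the prize."""
    return n * p * (1 - p) ** (n - 1)

n = 20
# Search symmetric strategies for the one maximizing the chance the prize
# is awarded at all; each player's expected payoff is proportional to this,
# since the single winner is equally likely to be any player.
best_p = max((k / 1000 for k in range(1, 1000)),
             key=lambda p: p_exactly_one(n, p))
print(best_p)                   # 0.05, i.e. one in twenty
print(p_exactly_one(n, 1 / n))  # ~0.377: even the best symmetric strategy
                                # usually awards no prize
```

Note that the optimal symmetric strategy still fails most of the time; it is simply the best each player can do given that all reason identically.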
Gary Drescher (see references) developed the concept further, introducing an ethical system called "acausal subjunctive morality" based on a mechanism similar to super-rationality. Drescher's approach relies on the agents being identical, or at least similar, so that each agent can reasonably guess what the other will do from facts about its own behavior. If it defects, the other will defect, and likewise if it cooperates, because their "source code" is the same.
Acausal trade goes one step beyond this. The agents do not need to be identical, nor do they need to each have the same utility function for themselves. Moreover, the agents do not need to be told in advance what the other agents are like, or even if they exist. In acausal trade, an agent may have to surmise the potential existence of the other agent, and calculate probability estimates about what the other agent would want it to do in "exchange."
Description
We have two agents, possibly separated so that no interaction is possible. The separation may be as simple as each agent not knowing the other's location, or each may be prevented for some reason from communicating with or affecting the other. As another example, one agent may be in the other's future, and so unable to affect it.
Other less prosaic scenarios are sometimes described, in order to more clearly illustrate situations in which interaction is absolutely impossible. For example, the agents may be outside each other's light cones, or they may be in separate worlds in a multiverse. The Everett many-worlds interpretation of quantum mechanics is sometimes used for this example. But a multiverse is not necessary: We can talk of different counterfactual "impossible possible worlds" as abstractions, just for the purpose of our calculations, without considering them to exist elsewhere in a multiverse, or we can simply talk about an agent's probability distributions across the different possibilities.
The usual condition for trade is that each agent can do something the other wants, and values that thing less than the other does. This is true in ordinary trade as well as in acausal trade. However, acausal trade can happen even if the two are completely separated and have no opportunity to affect each other causally.
In acausal trade, the agents cannot count on the usual enforcement mechanisms to ensure cooperation--there is no expectation of future interactions or an outside enforcer. The agents cooperate because each knows that the other can somehow predict its behavior very well, like Omega in Newcomb's problem. Each knows that if it defects (respectively: cooperates), the other will know this (perhaps probabilistically), and defect (respectively: cooperate), and so the best choice is to cooperate, since, as usual in trade, Cooperate/Cooperate is better for both sides than Defect/Defect.
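The effect of a reliable predictor on the one-shot Prisoner's Dilemma can be made concrete. The sketch below assumes standard illustrative payoffs (any numbers with temptation > reward > punishment > sucker give the same logic) and a partner that mirrors its prediction of our action with some accuracy:

```python
# Payoffs for the one-shot Prisoner's Dilemma (our utility).
# Illustrative values: temptation 5 > reward 3 > punishment 1 > sucker 0.
PAYOFF = {
    ("C", "C"): 3,  # reward for mutual cooperation
    ("C", "D"): 0,  # sucker's payoff
    ("D", "C"): 5,  # temptation to defect
    ("D", "D"): 1,  # punishment for mutual defection
}

def best_action(predictor_accuracy: float) -> str:
    """Choose an action given that the partner predicts it with the stated
    accuracy and plays the predicted action (an Omega-like predictor)."""
    def expected(a: str) -> float:
        mirrored = PAYOFF[(a, a)]                       # prediction correct
        missed = PAYOFF[(a, "D" if a == "C" else "C")]  # prediction wrong
        return predictor_accuracy * mirrored + (1 - predictor_accuracy) * missed
    return max(("C", "D"), key=expected)

print(best_action(0.9))  # "C": against a good predictor, cooperation wins
print(best_action(0.5))  # "D": against a coin-flip "predictor", defect wins
```

With these payoffs, cooperation becomes the best choice once the partner's predictive accuracy is high enough; against a partner that cannot predict at all, the usual dominance argument for defection reasserts itself.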
Acausal trade can also be described in terms of (pre)commitment: Both agents commit to cooperate, and each has reason to think that the other is also committing. "Commitment" can be described as source code that provably will always cooperate in acausal trade scenarios.
Prediction mechanisms
For acausal trade to occur, each agent must infer that there is some probability that the other exists, and that the other will trade with it.
For the belief that the trading partner exists, we may have the simple case in which the agents are told this exogenously, as part of the scenario. But more interesting is the case in which the agent concludes that the other is likely to exist.
A superintelligence might conclude that other superintelligences would tend to exist because increased intelligence is an attractor for agents. Given the existence of a superintelligence, acausal trade is one of the tricks it would tend to use.
To take a more prosaic example, a person might realize that humans tend to be alike: even without knowing about specific acausal trading partners, they know that there exist other people with similar situations, goals, desires, challenges, resource constraints, and mental architectures.
Once an agent realizes that other agents might exist, there are different ways of describing how it can predict another agent's behavior, and specifically that the other agent can do something the first agent wants and that it will do so in "exchange" for the first agent's behavior.
- They might know each other's mental architectures (source code).
- In particular, they might know that they have identical or similar mental architecture, so that each one knows that its own mental processes approximately simulate the other's. See Gary Drescher's "acausal subjunctive cooperation."
- They might be able to simulate each other, or to predict the other's behavior analytically. The simulation might be approximate, and the predictions probabilistic. (Simulation in this sense does not require full knowledge of the other agent's source code, nor the ability to run that code in precise simulation: even we humans simulate each other's thoughts to guess what the other would do.)
- More broadly, acausal trade may be possible for two agents each of which knows nothing about the other's algorithm (source code, mental architecture). It is enough to know (probabilistically) that the other is a powerful optimizer, that it has a certain utility function, and that it gets different utility from different resources. Seen mathematically, this is just an optimization problem: what is the best possible algorithm for an agent's utility function? An optimum may be reached with algorithms in which each intelligence sacrifices the opportunity to generate some utility for itself, and instead generates utility for the other, while the other symmetrically does the same. To prove this by contradiction, assume, counterfactually, that one agent can achieve optimal utility by defecting. Then, symmetrically, the other could do the same, resulting in a Defect/Defect outcome that is worse than a Cooperate/Cooperate outcome and so suboptimal, contradicting the assumption. So, an optimal agent would cooperate in the acausal trade.
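The optimization-over-algorithms argument in the last bullet can be put in miniature: restrict "algorithms" to a one-bit policy, and impose the symmetry constraint that the partner, solving the same optimization problem, arrives at the same policy. The payoff numbers are illustrative:

```python
# Choosing the best *algorithm* rather than the best causal act.
# With a symmetric partner, the only reachable outcomes lie on the
# diagonal: whatever policy we pick, the partner picks it too.
PAYOFFS = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def symmetric_utility(policy: str) -> int:
    """Utility of a policy, given that the partner adopts the same one."""
    return PAYOFFS[(policy, policy)]

optimal = max(("C", "D"), key=symmetric_utility)
print(optimal)  # "C": among symmetric outcomes, mutual cooperation is best
```

The off-diagonal temptation payoff of 5 is unreachable under the symmetry constraint, which is exactly why the contradiction argument goes through.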
Decision Theories
Acausal trade is a special case of Updateless decision theory (UDT). Unlike better-known forms of decision theory, such as Causal decision theory, acausal trade and UDT treat the agent's own algorithm as both cause and effect.
In Causal Decision Theory, the agent's algorithm (implementation) is treated as uncaused by the rest of the universe, so that though the agent's *decision* and subsequent action can make a difference, its internal make-up cannot (except through that decision). In contrast, in UDT, the agents' own algorithms are treated as causal nodes, influenced by other factors, such as the logical requirement of optimality in a utility-function maximizer. In UDT, as in acausal trade, the agent cannot escape the fact that its decision to defect or cooperate constitutes strong Bayesian evidence as to what the other agent will do, and so it is better off cooperating.
Limitations and Objections
Acausal trade only works if the agents are smart enough to predict each other's behavior, and then smart enough to acausally trade. If one agent is stupid enough to "defect," and the second is smart enough to predict the first, then neither will cooperate.
Also, as in regular trade, acausal trade only works if the weaker side can offer resources that make it worth the stronger side's transaction costs.
Even positing near-equal power and tradeable resources, a common objection to acausal trade resembles an objection to UDT: why shouldn't the agent choose to defect? Whatever the considerations for cooperation, can't it "at the last moment" choose to back out and get a better result by "cheating"? However, this approach takes into account only the direct effect of the decision, while in fact the choice of decision provides strong evidence for the agent's internal design. If it would defect, that is evidence that it has an algorithm of the sort that would defect, and a sufficiently intelligent trading partner could predict that, if only probabilistically. Thus, it is best for the agent to pursue a policy that the other agent can figure out is a cooperative one.
Another objection: Can an agent care about (have a utility function that takes into account) entities with which it can never interact, and about whose existence it is not certain? However, this is quite common even for humans today. We care about the suffering of other people in faraway lands whom we have no opportunity to influence. And in an even more clear-cut example of acausality, we are disturbed by the suffering of long-gone historical people, and wish that, counterfactually, the suffering had not happened, though we know that changing history is impossible. We even care about entities that we are not sure exist. For example: We might read some news report that a valuable archaeological find was destroyed in a distant country, yet according to other news reports, the story was made up, the crime did not really occur, and the item allegedly involved never even existed. People even get emotionally attached to the fate of a fictional character.
An example of acausal trade with simple resource requirements
At its most abstract, the agents are simply optimization algorithms. As a toy example, let T be a utility function for which time is the most valuable resource, while for utility function S, space is most valuable; for the purpose of our toy model, assume that these are the only two possible resources.
We will now choose the best algorithms for optimizing T and S. To avoid anthropomorphizing and asking what the agents will decide to do, we simply ask which algorithm--which source code--would give the highest expected utility for a given utility function. Thus, the choice of source code is "timeless": We treat it as an optimization problem across all possible strings of source code.
We specify that there is some probability that the agent for T (and likewise for S) will be run in an environment where time is in abundance, and some probability that it will be run in a space-rich universe.
(This uncertainty is sometimes expressed in terms of multiverses, but we need only a typical decision framework in which probability is multiplied by utility to calculate expected value.)
If the algorithm for T is instantiated in a space-rich environment, it will only be able to gain a small amount of utility for itself, but S would be able to gain a lot of utility; and vice versa.
The question is what algorithm for T, and respectively for S, provides the most optimization power, the highest expected value for the agent itself.
If it turns out that the environment is space-rich, the agent for T may run the agent (the algorithm) for S, increasing the utility for S, and symmetrically the reverse. This will happen if each of these can prove, perhaps probabilistically, that the optimum occurs when the other agent has the "trading" feature, and so will in fact have that feature.
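This toy model can be made numeric. The sketch below invents concrete values: each algorithm is equally likely to be instantiated in a time-rich or space-rich environment, and an environment's favored resource yields ten times the utility of the other. By symmetry, the same calculation holds for S:

```python
# Toy model: T values time, S values space; each algorithm lands in a
# time-rich world with probability P_TIME, else a space-rich one.
# All numbers are invented for illustration.
P_TIME = 0.5            # probability the environment is time-rich
RICH, POOR = 10.0, 1.0  # utility from the favored / unfavored resource

def expected_utility_for_T(trade: bool) -> float:
    # Acting alone, T earns RICH in a time-rich world, POOR otherwise.
    direct = P_TIME * RICH + (1 - P_TIME) * POOR
    if not trade:
        return direct
    # With the trade: S's instantiation runs T's algorithm whenever S
    # lands in a time-rich world, and in exchange T forgoes its own small
    # payoff to run S's algorithm when T lands in a space-rich world.
    gained = P_TIME * RICH         # utility S generates for T
    forgone = (1 - P_TIME) * POOR  # T's space-world effort spent on S
    return direct + gained - forgone

print(expected_utility_for_T(False))  # 5.5
print(expected_utility_for_T(True))   # 10.0: the trading algorithm wins
```

Because each agent gives up only the small utility it could extract from its unfavored resource, and receives the large utility the other can extract from its favored one, the trading source code has strictly higher expected value, which is why the optimization across all possible source code selects it.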
Acausal trade with complex resource requirements
In the toy example above, resource requirements are very simple: Time and space. In general, given that agents can have complex and arbitrary goals requiring a complex mix of resources, an agent would not be able to conclude that a specific trading partner exists and will acausally cooperate.
However, an agent can analyze the probability distribution over the existence of other agents, and weight its actions accordingly. It will do acausal "favors" for one or more trading partners, weighting its effort according to its subjective probability that the partner exists and will acausally reciprocate. (Alternatively, this can be described as variants of an agent across a multiverse doing different "favors" for different acausal trading partners who are likewise distributed across the multiverse.) The expected utility given and received will come into a good enough balance to benefit the traders, so long as the probabilistic sampling reflects an accurate distribution--as will happen in the limiting case of increasing superintelligence.
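The weighting described here can be sketched as a simple proportional allocation. The partner descriptions and probabilities below are hypothetical, and the linear split is only one possible scheme:

```python
# Divide a fixed "favor budget" among hypothetical trading partners in
# proportion to P(partner exists) * P(partner reciprocates | exists).
def allocate_favors(budget: float, partners: dict[str, float]) -> dict[str, float]:
    """Split `budget` across partners proportionally to their weights."""
    total = sum(partners.values())
    return {name: budget * w / total for name, w in partners.items()}

# Each weight is the subjective probability that the partner exists and
# would reciprocate; names and values are invented for the example.
weights = {
    "time-valuing agent": 0.30,
    "space-valuing agent": 0.15,
    "negentropy-valuing agent": 0.05,
}
allocation = allocate_favors(1.0, weights)
print(allocation)  # proportions 0.6, 0.3, 0.1 (up to float rounding)
```

More probable and more cooperative partners receive proportionally more effort, so in expectation the favors given roughly match the favors received, provided the subjective distribution tracks the true one.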
Ordinary trade
Ordinary trade is usually analyzed causally: The two parties can have causal influence on each other when they share information, threaten retribution, promise payment, and bind themselves legally. Then, when the trade is carried out as promised, or else one or more parties defect, they can give rewards or carry out threats.
Even ordinary scenarios can be analyzed acausally, using a perspective similar to that of Updateless decision theory. We ask: which algorithm should an agent have to get the best expected value, summing across all possible environments weighted by their probability? The possible environments include those in which threats and promises have been made.
Counterpoints
Acausal trade may be impossible under certain conditions:
- One of the agents is "stupid" enough not to trade; in Prisoner's Dilemma terms, it will defect. (This is stupidity, as it will result in an outcome inferior to acausal trade.) If the other agent is smart enough to realize this, it too will not carry out the trade, and will defect.
- One of the agents is so lacking in resources compared to the other that it can acausally trade nothing that is more valuable than transaction costs.
- "AI deterrence"
- "The AI in a box boxes you"
- A story that shows acausal trade in action.
- "Hail Mary, Value Porosity, and Utility Diversification," Nick Bostrom, the first paper from academia to rely on the concept of acausal trade.
- Towards an idealized decision theory, by Nate Soares and Benja Fallenstein discusses acausal interaction scenarios that shed light on new directions in decision theory.
- Hofstadter's Superrationality essays, published in Metamagical Themas (LW discussion)
- Jaan Tallinn, Why Now? A Quest in Metaphysics.
- Gary Drescher, Good and Real, MIT Press, 1996.