Updateless decision theory
Updateless Decision Theory (UDT) is a decision theory meant to deal with a fundamental problem in the existing decision theories: the need to treat the agent as a part of the world in which it makes its decisions. In contrast, in the most common decision theory today, Causal Decision Theory (CDT), the deciding agent is not part of the world model--its decision is the output of CDT, but the decision itself is "magic": it does not result from anything in the modeled world. It is uncaused, like a dualist free-will theory with non-material souls.
Getting this issue right is critical in building a self-improving artificial general intelligence, as such an AI must analyze its own behavior and that of any next generation it may build.
UDT specifies that the optimal agent is the one with the best algorithm--the best mapping from observations to actions--across a probability distribution of all world-histories. ("Best" here, as in other decision theories, means one that maximizes a utility/reward function.)
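The idea of scoring whole algorithms rather than individual actions can be sketched in a few lines. This is a minimal illustration, not part of UDT's formal machinery; the observations, world-histories, probabilities, and payoffs below are invented for the example.

```python
from itertools import product

# UDT evaluates entire policies -- mappings from observations to actions --
# by their expected utility across a distribution over world-histories,
# then selects the policy that scores best. All names and numbers here
# are illustrative assumptions.

observations = ["rainy", "sunny"]
actions = ["umbrella", "no_umbrella"]

# Hypothetical prior over world-histories.
world_histories = {"wet_day": 0.4, "dry_day": 0.6}

def utility(history, policy):
    # Toy payoffs: an umbrella pays off on a wet day, costs a little otherwise.
    obs = "rainy" if history == "wet_day" else "sunny"
    act = policy[obs]
    if history == "wet_day":
        return 10 if act == "umbrella" else 0
    return 5 if act == "no_umbrella" else 3

def expected_utility(policy):
    return sum(p * utility(h, policy) for h, p in world_histories.items())

# Enumerate every mapping from observations to actions; keep the best one.
policies = [dict(zip(observations, combo))
            for combo in product(actions, repeat=len(observations))]
best = max(policies, key=expected_utility)
# best maps "rainy" -> "umbrella" and "sunny" -> "no_umbrella"
```

The point of the sketch is that the unit of evaluation is the whole mapping, fixed before any particular observation arrives, rather than a single action chosen after the fact.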
This definition may seem trivial, but it contrasts sharply with CDT, which says that an agent should choose the best option at any given moment based on the effects of that action. As in Judea Pearl's definition of causality, CDT "cuts" (ignores) any causal links inbound to the decider, treating the agent as an uncaused cause. The agent is unconcerned about what evidence its decision may provide about its own mental makeup--evidence which may suggest that it will make suboptimal decisions in other cases.
Evidential Decision Theory (EDT) is the other leading decision theory today. It says that the agent should make the choice for which the expected utility, as calculated with Bayes' theorem, is highest. EDT avoids CDT's pitfall, but has a flaw of its own: it ignores the distinction between causation and correlation. In CDT the agent is an uncaused cause; in EDT, the converse: it is caused but not a cause. A valuable aspect of EDT is reflected in "UDT 1.1" (see the article by McAllister in the references), a variant of UDT in which the agent takes into account that some of its algorithm (its mapping from observations to actions) may be prespecified and not entirely in its control, so that it has to gather evidence and draw conclusions about this prespecified part of its own mental makeup.
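The divergence between CDT and EDT is usually illustrated with Newcomb-style problems, where a reliable predictor fills an opaque box only if it predicted the agent would take that box alone. The numbers and predictor accuracy below are assumptions for illustration; the sketch shows only where the two calculations differ.

```python
# CDT holds the predictor's already-made move fixed when evaluating an action;
# EDT treats the action as evidence about what was predicted. Payoffs and
# accuracy are illustrative assumptions.

ACCURACY = 0.99      # hypothetical P(predictor guessed the action correctly)
BOX_A = 1_000        # transparent box, always present
BOX_B = 1_000_000    # opaque box, filled iff one-boxing was predicted

def edt_value(action):
    # EDT: the chosen action is evidence about the prediction.
    p_predicted_one_box = ACCURACY if action == "one_box" else 1 - ACCURACY
    filled = p_predicted_one_box * BOX_B
    return filled + (BOX_A if action == "two_box" else 0)

def cdt_value(action, p_box_filled):
    # CDT: the prediction is fixed; the action cannot change it, so taking
    # both boxes always adds BOX_A on top of whatever is already there.
    return p_box_filled * BOX_B + (BOX_A if action == "two_box" else 0)
```

Under these numbers EDT prefers one-boxing, while CDT prefers two-boxing for every fixed probability that the opaque box is filled: the correlation between the agent's action and the predictor's move carries no causal weight in CDT's calculation.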
A robust theory of logical uncertainty is essential to a full formalization of UDT. A UDT agent must calculate probabilities and expected values on the outcome of its possible actions in all possible scenarios (observations). However, it does not know its own actions. (The whole point is to derive its actions.) On the other hand, it does have some knowledge about its actions, just as you know that you are unlikely to walk straight into a wall the next chance you get. It models itself as an algorithm, and its probability distribution about what it itself will do is an important input into the UDT calculation. (Logical uncertainty is an area which has not yet been properly formalized, and much UDT research is focused on it.)
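One place where the agent's distribution over its own actions enters the calculation directly is the absent-minded driver problem (listed in the references below). The driver cannot tell which of two identical intersections it is at, so it evaluates whole policies -- here, a single probability of exiting -- rather than updating on its location. The payoffs follow the standard formulation of the problem (exit at the first intersection: 0; exit at the second: 4; drive past both: 1).

```python
# The policy is a single number: the probability p of exiting at an
# intersection. Because both intersections look identical, the same p
# applies at each, and p itself appears inside the expected utility.

def expected_utility(p_exit):
    exit_first = p_exit * 0                    # payoff 0
    exit_second = (1 - p_exit) * p_exit * 4    # payoff 4
    continue_on = (1 - p_exit) ** 2 * 1        # payoff 1
    return exit_first + exit_second + continue_on

# Grid-search over policies; the optimum is at p = 1/3, with expected
# utility 4/3.
best_p = max((i / 1000 for i in range(1001)), key=expected_utility)
```

The agent's uncertainty about what it itself will do is not noise to be eliminated; it is part of the object being optimized, which is why a formal account of this kind of self-knowledge matters for UDT.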
- Indexical uncertainty and the Axiom of Independence by Wei Dai
- Towards a New Decision Theory by Wei Dai
- Anthropic Reasoning in UDT by Wei Dai
- The Absent-Minded Driver by Wei Dai
- Why (and why not) Bayesian Updating? by Wei Dai
- What Are Probabilities, Anyway? by Wei Dai
- Explicit Optimization of Global Strategy (Fixing a Bug in UDT1) by Wei Dai
- List of Problems That Motivated UDT by Wei Dai
- Another attempt to explain UDT by cousin_it
- What is Wei Dai's Updateless Decision Theory? by Tyrrell McAllister
- All posts tagged "UDT"
In addition to whole posts on UDT, there are also a number of comments which contain important information, often on less relevant posts.
- Formal description of UDT by Tyrrell McAllister
- UDT with known search order by Tsvi Benson-Tilsen
- Problem Class Dominance in Predictive Dilemmas, section 3.4. (The best summary to date.)
- An introduction to decision theory (series of posts)