# AIXI


## Revision as of 06:01, 22 August 2012

AIXI is an algorithm for a maximally intelligent agent, developed by Marcus Hutter.

AIXI is provably more intelligent than any other possible agent. However, it is not a feasible AI: it relies on Kolmogorov complexity and Solomonoff induction, which are not computable, and it evaluates expected value over an infinite set of possible actions and environments at each step. Thus, it serves not as a design for a real AI, but as a theoretical model of intelligence that abstracts away the resource limitations which both constrain the intelligence of real-world AIs and complicate their analysis.
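For reference, Hutter's AIXI action selection can be written (in one common form) as an expectimax over all future action–percept sequences, with environments modeled as programs \(q\) on a universal Turing machine \(U\), weighted by the Solomonoff-style prior \(2^{-\ell(q)}\), where \(\ell(q)\) is the length of \(q\) and \(m\) is the horizon:

```latex
a_t \;:=\; \arg\max_{a_t} \sum_{o_t r_t} \;\cdots\; \max_{a_m} \sum_{o_m r_m}
\bigl[\, r_t + \cdots + r_m \,\bigr]
\sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Here \(a_k\), \(o_k\), and \(r_k\) are the action, observation, and reward at step \(k\). The inner sum over all consistent programs \(q\) is what makes the expression incomputable.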

AIXI has also served to inspire a computable variant, AIXItl, which is provably more intelligent within time and space constraints than any other agent with the same constraints. AIXItl too is intractable, but implementable variants such as the Monte Carlo approximation by Veness et al. have shown promising results in simple general-intelligence test problems.
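The core idea behind such approximations can be illustrated with a toy sketch: estimate each action's value by Monte-Carlo rollouts under a small set of candidate environment models, weighting each model by a complexity-based prior. This is only a minimal illustration of the principle, not the MC-AIXI-CTW algorithm of Veness et al.; the models and weights below are hypothetical.

```python
import random

ACTIONS = [0, 1]

# Toy candidate environment models (hypothetical, for illustration only).
# Each maps an action to a reward; a real approximation would learn a
# mixture over sequence predictors instead.
def env_rewards_action_1(action, rng):
    return 1.0 if action == 1 else 0.0

def env_rewards_action_0(action, rng):
    return 1.0 if action == 0 else 0.0

def env_noisy(action, rng):
    return rng.random()  # reward independent of the action

# (model, prior weight): weights stand in for 2^-(description length),
# so "simpler" models receive more prior mass.
MODELS = [
    (env_rewards_action_1, 0.50),
    (env_rewards_action_0, 0.25),
    (env_noisy,            0.25),
]

def expected_reward(action, horizon=5, rollouts=200):
    """Prior-weighted Monte-Carlo estimate of cumulative reward
    for repeating `action` over the given horizon."""
    rng = random.Random(42)  # fixed seed for a reproducible estimate
    total = 0.0
    for model, weight in MODELS:
        acc = 0.0
        for _ in range(rollouts):
            acc += sum(model(action, rng) for _ in range(horizon))
        total += weight * acc / rollouts
    return total

def select_action():
    """Pick the action with the highest prior-weighted value estimate."""
    return max(ACTIONS, key=expected_reward)
```

Here action 1 wins because the prior puts more weight on the model that rewards it; the noisy model contributes equally to both estimates and so washes out.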

Eliezer Yudkowsky and others have pointed out that AIXI lacks a self-model: it extrapolates its own actions indefinitely into the future, on the assumption that it will keep working the same way. Though AIXI is an abstraction, any real AI would have a physical embodiment that could be damaged, and an implementation that could change its own behavior due to bugs; the AIXI formalism ignores these possibilities entirely.

## References

- [M. Hutter (2010) Universal Algorithmic Intelligence: A mathematical top-down approach](http://www.hutter1.net/ai/aixigentle.htm). In Goertzel & Pennachin (eds.), Artificial General Intelligence, 227-287. Berlin: Springer.
- M. Hutter (2005) Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability. Berlin: Springer.
- [J. Veness, K. S. Ng, M. Hutter, W. Uther and D. Silver (2011) A Monte-Carlo AIXI Approximation](http://www.jair.org/media/3125/live-3125-5397-jair.pdf), *Journal of Artificial Intelligence Research* 40, 95-142.

## Blog posts

- AIXI and Existential Despair by paulfchristiano
- [video] Paul Christiano's impromptu tutorial on AIXI and TDT