Orthogonality thesis

The '''orthogonality thesis''' states that an artificial intelligence can have any combination of intelligence level and goal. This is in contrast to the belief that, because of their intelligence, AIs will all converge to a common goal. The thesis was originally defined by Nick Bostrom in the paper "The Superintelligent Will" (along with the instrumental convergence thesis). For his purposes, Bostrom defines intelligence as instrumental rationality.
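
To make the independence claim concrete, here is a minimal toy sketch (not from the article; the agent, names, and numbers are purely illustrative). An expectimax-style agent takes its search depth (a crude stand-in for intelligence level) and its utility function (its goal) as two separate parameters, so any goal can be paired with any amount of optimization power:

<pre>
# Toy illustration (illustrative only, not from the article): "intelligence"
# (search depth) and "goal" (utility function) are independent parameters.
from typing import Callable, List

class ToyAgent:
    def __init__(self, depth: int, utility: Callable[[int], float]):
        self.depth = depth      # crude stand-in for intelligence level
        self.utility = utility  # arbitrary goal, supplied from outside

    def act(self, state: int, actions: List[int]) -> int:
        # Pick the action whose lookahead value is highest.
        return max(actions,
                   key=lambda a: self._value(state + a, actions, self.depth - 1))

    def _value(self, state: int, actions: List[int], depth: int) -> float:
        if depth <= 0:
            return self.utility(state)
        return max(self._value(state + a, actions, depth - 1) for a in actions)

# The same machinery optimizes opposite goals equally well:
maximizer = ToyAgent(depth=3, utility=lambda s: s)    # wants a large state
minimizer = ToyAgent(depth=3, utility=lambda s: -s)   # wants a small state
print(maximizer.act(0, [-1, 1]))  # 1
print(minimizer.act(0, [-1, 1]))  # -1
</pre>

Swapping the utility function reverses the agent's behavior without touching the optimization machinery, which is the intuition behind the thesis.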

It has been pointed out that the orthogonality thesis is the default position, and that the burden of proof is on claims that limit the space of possible AIs. One reason many assume that superintelligences would converge to the same goals may be that [[Human universal|most humans]] have similar values. Furthermore, many philosophies hold that there is a rationally correct morality.

Stuart Armstrong points out that for formalizations of AI such as [[AIXI]] and [[Gödel machine|Gödel machines]], the thesis is known to be true. Furthermore, if the thesis were false, then [[Oracle AI|Oracle AIs]] would be impossible to build, and all sufficiently intelligent AIs would be impossible to control.
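
For AIXI specifically, a sketch of why the thesis holds: in Hutter's standard formulation of the optimal action (reproduced below), the reward terms <math>r_k, \ldots, r_m</math> occupy a slot separate from the Solomonoff mixture <math>\textstyle\sum_q 2^{-\ell(q)}</math> that supplies the agent's predictive power, so any computable reward signal can be substituted without altering the optimization machinery:

:<math>a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \Big[ (r_k + \cdots + r_m) \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)} \Big]</math>

Here <math>U</math> is a universal monotone Turing machine, <math>\ell(q)</math> is the length of program <math>q</math>, and <math>m</math> is the horizon.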

==Pathological Cases==

There are some pairings of intelligence and goals which cannot exist. For instance, an AI may have the goal of using as few resources as possible, or simply of being as unintelligent as possible. Such goals inherently limit the degree of intelligence of the AI.

==Blog Posts==

==See Also==

==External Links==