Extensibility argument for greater-than-human intelligence

From Lesswrongwiki

An extensibility argument for greater-than-human intelligence holds that once we reach human-level AGI, extensibility would make an AGI of greater-than-human intelligence feasible. It is identified by David Chalmers as one of the main premises of the singularity and intelligence explosion hypothesis [1]. One intuitive ground for this argument is that information technologies have consistently developed toward ever greater computational capacity. Chalmers presents the argument as follows:

  • (i) If there is AI, AI will be produced by an extendible method.
  • (ii) If AI is produced by an extendible method, we will have the capacity to extend the method (soon after).
  • (iii) Extending the method that produces an AI will yield an AI+.

——————

  • (iv) Absent defeaters, if there is AI, there will (soon after) be AI+. [1]
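The argument is a simple chain of conditionals, so its deductive validity can be checked mechanically. As a minimal sketch (the proposition names are my own, and the "absent defeaters" qualifier is folded into the premises rather than modeled explicitly), it can be stated in Lean:

```lean
-- Propositional sketch of Chalmers' extensibility argument.
-- AI        : human-level AI exists
-- ExtMethod : AI was produced by an extendible method
-- CanExtend : we have the capacity to extend that method
-- AIPlus    : greater-than-human AI (AI+) exists
variable (AI ExtMethod CanExtend AIPlus : Prop)

theorem extensibility
    (h1 : AI → ExtMethod)          -- premise (i)
    (h2 : ExtMethod → CanExtend)   -- premise (ii)
    (h3 : CanExtend → AIPlus)      -- premise (iii)
    : AI → AIPlus :=               -- conclusion (iv)
  fun hAI => h3 (h2 (h1 hAI))     -- chained modus ponens
```

The formalization makes explicit that the conclusion follows by nothing more than chained modus ponens; all of the philosophical work is in defending the three premises.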

He says premises (i) and (ii) follow directly from most definitions of 'extendible method': a method that can easily be improved, yielding more intelligent systems. One possible extendible method would be programming an AGI, since all known software seems improvable. One known non-extendible method is biological reproduction; it produces human-level intelligence and nothing more. There could also be methods of achieving greater-than-human intelligence without creating human-level AGI, for example through biological cognitive enhancement or genetic engineering. If the resulting greater-than-human intelligence is itself extendible, and likewise each subsequent level of intelligence, then an intelligence explosion would follow. It could be argued that we are at or near a ceiling of intelligence that is very hard to surpass, but there seems to be little to no basis for this.

Luke Muehlhauser and Anna Salamon [2] list several features of an artificial human-level intelligence that suggest it would be easily extendible: increased computational resources, increased communication speed, increased serial depth, duplicability, editability, goal coordination, and improved rationality. They also agree with Omohundro [3][4][5] and Bostrom [6] that most advanced intelligences would have the instrumental goal of increasing their own intelligence, since this would help achieve almost any other goal.

References

  1. Chalmers, David (2010). "The Singularity: A Philosophical Analysis." Journal of Consciousness Studies 17 (9-10): 7-65. http://consc.net/papers/singularity.pdf
  2. Muehlhauser, Luke & Salamon, Anna (2012). "Intelligence Explosion: Evidence and Import." In Singularity Hypotheses. Springer. http://singularity.org/files/IE-EI.pdf
  3. Omohundro, Stephen M. (2007). "The Nature of Self-Improving Artificial Intelligence." Paper presented at Singularity Summit 2007, San Francisco, CA, September 8-9. http://singularity.org/summit2007/overview/abstracts/#omohundro
  4. Omohundro, Stephen M. (2008). "The Basic AI Drives." In Wang, Goertzel, and Franklin 2008, 483-492.
  5. Omohundro, Stephen M. (Forthcoming). "Rational Artificial Intelligence for the Greater Good." In Eden, Søraker, Moor, and Steinhart, forthcoming.
  6. Bostrom, Nick (2012). "The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents." In Theory and Philosophy of AI, ed. Vincent C. Müller. Special issue, Minds and Machines 22 (2): 71-85. doi:10.1007/s11023-012-9281-3. http://www.nickbostrom.com/superintelligentwill.pdf