Singularity paper


A draft page for the upcoming Less Wrong paper on the Singularity. See http://lesswrong.com/lw/1fz/a_less_wrong_singularity_article/ .

Basic structure

Introduction

Abstract

The creation of an artificial general intelligence (AGI) is plausible. Humans are the first and only example of a general intelligence we have, yet there are strong empirical and theoretical reasons to believe we are nowhere near maximally intelligent. A sufficient understanding of intelligence, or of whatever related capacities account for human technological prowess, may allow researchers to reproduce the process in a computer. Computer-based AIs could think much faster than humans, expand onto new hardware, and undergo recursive self-improvement. These possibilities suggest that such an AGI, once created, is likely to undergo an intelligence explosion, rapidly increasing its capability far beyond the human level.
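
As a purely illustrative sketch (not part of the draft itself): a minimal Python toy model of the feedback loop the abstract describes, in which each increment of capability changes how fast the next increment arrives. The growth rule, exponent values, rate, and ceiling below are arbitrary assumptions chosen only to show that the shape of the returns-to-improvement curve, rather than the starting level, determines whether improvement explodes or stalls.

    # Toy model of recursive self-improvement (illustrative assumptions only):
    #   level[t+1] = level[t] + rate * level[t] ** exponent
    # exponent > 1: each gain makes the next gain disproportionately larger,
    #   so growth accelerates without bound (the "explosion" case).
    # exponent <= 1: growth is at most exponential, and a fixed ceiling is
    #   reached slowly or not at all on short timescales.

    def steps_to_ceiling(exponent, rate=0.01, start=1.0, ceiling=1e6, max_steps=10_000):
        """Steps until capability exceeds `ceiling`, or None if it never does."""
        level = start
        for step in range(1, max_steps + 1):
            level += rate * level ** exponent
            if level >= ceiling:
                return step
        return None

    if __name__ == "__main__":
        for exponent in (0.5, 1.0, 1.5):
            result = steps_to_ceiling(exponent)
            if result is not None:
                print(f"returns exponent {exponent}: ceiling reached after {result} steps")
            else:
                print(f"returns exponent {exponent}: ceiling not reached within 10,000 steps")

Under these made-up numbers the superlinear-returns case crosses the ceiling in roughly two hundred steps, the linear case in over a thousand, and the sublinear case not at all within the budget; that qualitative distinction is the point the abstract gestures at.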


  • The question of FOOM

    - improvement up to human-comparable programming ability
    - improvement at human-comparable programming ability and beyond
    - effects of hardware scale and of minds that can be copied and run quickly, vs. qualitative improvements (see the sketch after this outline)



  • Conclusions
  • References
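
The hardware-versus-qualitative distinction in the outline above can also be made concrete with a toy comparison. This is again illustrative only; the copy counts, speedups, and improvement rate are arbitrary assumptions, not estimates from the draft. Running many fast copies multiplies output by a large constant factor, while qualitative self-improvement compounds and eventually dominates any fixed multiplier.

    # Toy comparison of two routes to greater capability (all numbers are
    # arbitrary assumptions, not estimates from the draft):
    # 1. Hardware scale: N copies at S-times human speed multiply the work done
    #    per hour by a constant factor N * S.
    # 2. Qualitative improvement: each hour of work also improves the worker,
    #    so the rate of work compounds over time.

    def work_from_copies(base_rate, copies, speedup, hours):
        """Total work from `copies` minds running at `speedup` x human speed."""
        return base_rate * copies * speedup * hours

    def work_from_self_improvement(base_rate, gain_per_hour, hours):
        """Total work when each hour of work multiplies the work rate itself."""
        total, rate = 0.0, base_rate
        for _ in range(hours):
            total += rate
            rate *= 1.0 + gain_per_hour  # compounding qualitative gains
        return total

    if __name__ == "__main__":
        for hours in (24, 24 * 7, 24 * 30):
            fixed = work_from_copies(base_rate=1.0, copies=1_000, speedup=100, hours=hours)
            compounding = work_from_self_improvement(base_rate=1.0, gain_per_hour=0.05, hours=hours)
            print(f"{hours:>4} h   copies/speedup: {fixed:.3g}   compounding: {compounding:.3g}")

With these assumed numbers the fixed multiplier wins over a day or a week, but the compounding route overtakes it within a month; the contrast is the kind of qualitative difference the outline item points to.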

References

List here references to work that can be used for building the argument.

Works by Yudkowsky

Less Wrong posts

Other Yudkowsky material

Other sources and references

See list here and also this

  • Nick Bostrom, “Ethical Issues in Advanced Artificial Intelligence,” Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Vol. 2, ed. I. Smit et al., Int. Institute of Advanced Studies in Systems Research and Cybernetics, 2003.
  • Ben Goertzel, “Thoughts on AI Morality,” Dynamical Psychology, 2002.
  • Ben Goertzel, “The All-Seeing (A)I,” Dynamical Psychology, 2004.
  • Ben Goertzel, “Encouraging a Positive Transcension,” Dynamical Psychology, 2004.
  • Stephan Vladimir Bugaj and Ben Goertzel, “Five Ethical Imperatives and their Implications for Human-AGI Interaction.”
  • J. Storrs Hall, “Engineering Utopia”, Artificial General Intelligence 2008: Proceedings of the First AGI Conference, Volume 171, Frontiers in Artificial Intelligence and Applications, ed. P. Wang, B. Goertzel and S. Franklin, 2008.
  • Steve Omohundro, “The Basic AI Drives,” Artificial General Intelligence 2008: Proceedings of the First AGI Conference, Volume 171, Frontiers in Artificial Intelligence and Applications, ed. P. Wang, B. Goertzel and S. Franklin, 2008.
  • Eliezer Yudkowsky's chapters in Global Catastrophic Risks, ed. N. Bostrom and M. Ćirković, Oxford University Press, 2008.
  • ECAP 2009 talks by Kaj Sotala and Carl Shulman.
  • Ben Goertzel and Cassio Pennachin, eds., Artificial General Intelligence, Cognitive Technologies, Springer, 2007.