Singularity paper

A draft page for the upcoming Less Wrong paper on the Singularity. See http://lesswrong.com/lw/1fz/a_less_wrong_singularity_article/ .
 
Eliezer Yudkowsky's arguments from Overcoming Bias are linked in [[Intelligence explosion]].
== Basic structure ==
  
=== Introduction ===
  
=====Abstract=====
The creation of an artificial general intelligence (AGI) is plausible. Humans are the first and only example we have of a general intelligence, but we have strong empirical and theoretical reasons for believing that we are nowhere near maximally intelligent. A sufficient understanding of ''intelligence'', or of whatever related concept best accounts for human technological prowess, may allow researchers to reproduce the process in a computer. Computer-based AIs could think much faster than humans, expand onto new hardware, and undergo recursive self-improvement. These possibilities suggest that such an AGI, once created, is likely to undergo an ''intelligence explosion'', raising its capabilities far beyond the human level on a very short timescale.
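
As a sketch of why a very fast timescale is at least coherent (a standard toy argument, offered as an assumption-laden illustration rather than settled text for the paper): if a system's capability <math>I</math> grows at a rate proportional to <math>I</math> itself, the result is ordinary exponential growth, but if self-improvement compounds so that the growth rate scales like <math>I^2</math>, capability diverges in finite time:

:<math>\frac{dI}{dt} = kI \;\Rightarrow\; I(t) = I_0 e^{kt}, \qquad \frac{dI}{dt} = kI^2 \;\Rightarrow\; I(t) = \frac{I_0}{1 - kI_0 t}.</math>

The second solution blows up as <math>t \to 1/(kI_0)</math>; that finite-time divergence is one formal intuition behind the phrase ''intelligence explosion''.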
* '''The question of FOOM''' (see the toy model sketched after this outline)
** improvement up to human-comparable programming ability
** improvement at human-comparable programming ability and beyond
** effects of hardware scale, minds that can be copied and run quickly, vs. qualitative improvements

<s>* '''The question of FAI'''
**http://yudkowsky.net/singularity/aibox
**http://sl4.org/wiki/CoherentExtrapolatedVolition</s> ([http://lesswrong.com/lw/1fz/a_less_wrong_singularity_article/19lq deemed] beyond this paper's scope)

* '''Conclusions'''
* '''References'''
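
The hardware-scale vs. qualitative-improvement distinction in the FOOM item can be made concrete with a deliberately crude simulation. The sketch below is an illustration only; the growth rules and constants are assumptions, not results from any source cited on this page. One regime multiplies capability by a fixed factor each step, as added or faster hardware might; the other lets the size of each improvement grow with current capability, the signature of recursive self-improvement.

<pre>
# Crude toy model of two growth regimes from the outline above.
# All rules and constants are illustrative assumptions.

def simulate(steps, improve, c0=1.0):
    """Iterate capability c (1.0 = human-comparable) under an improvement rule."""
    c = c0
    for _ in range(steps):
        c = improve(c)
    return c

# Regime 1: hardware scaling only -- a constant external speedup per step.
# (Running more copies of a mind acts the same way: a multiplier on c.)
hardware_only = simulate(20, lambda c: c * 1.1)

# Regime 2: recursive self-improvement -- each qualitative improvement
# makes the system better at making the next one, so the per-step
# improvement factor itself grows with c.
recursive = simulate(20, lambda c: c * (1.0 + 0.1 * c))

print(f"hardware scaling, 20 steps:           {hardware_only:.3g}x")
print(f"recursive self-improvement, 20 steps: {recursive:.3g}x")
</pre>

Under these made-up constants the first regime yields only a few-fold gain while the second diverges within a couple dozen steps; the point is just that the two regimes have qualitatively different shapes, which is what the FOOM section needs to argue about.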
 
  
== References ==
List here references to work that can be used for building the argument.
  
===Works by Yudkowsky===
 
  
====Less Wrong posts====
  
List here particular posts of note whose content should be included.
 
  
* <s>Magical categories</s> (relates to FAI, not hard takeoff)
====Other Yudkowsky material====
*[http://intelligence.org/overview/whatisthesingularity What is the Singularity?]
*[http://intelligence.org/overview/whyworktowardthesingularity Why work toward the Singularity?]
*[http://yudkowsky.net/singularity/schools Three Singularity Schools]
*[http://intelligence.org/upload/LOGI/seedAI.html LOGI part 3, Seed AI]
*[http://yudkowsky.net/singularity/power The Power of Intelligence]
  
===Other sources and references===
List here references to peer-reviewed work that can be used for building the argument.
 
  
 
See [http://intelligence.org/blog/2009/01/27/writings-about-friendly-ai/ this list] and also [http://www.acceleratingfuture.com/michael/blog/2006/09/consolidation-of-links-on-friendly-ai/ this one].
 
 
Though any relevant source can be cited, the formally published items are:
 
  
 
* Nick Bostrom, “Ethical Issues in Advanced Artificial Intelligence,” in ''Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence'', vol. 2, ed. I. Smit et al., International Institute of Advanced Studies in Systems Research and Cybernetics, 2003.
* Ben Goertzel, “Thoughts on AI Morality,” ''Dynamical Psychology'', 2002.
* Ben Goertzel, “The All-Seeing (A)I,” ''Dynamical Psychology'', 2004.
* Ben Goertzel, “Encouraging a Positive Transcension,” ''Dynamical Psychology'', 2004.
* Stephan Vladimir Bugaj and Ben Goertzel, “Five Ethical Imperatives and their Implications for Human-AGI Interaction.”
* J. Storrs Hall, “Engineering Utopia,” in ''Artificial General Intelligence 2008: Proceedings of the First AGI Conference'', Frontiers in Artificial Intelligence and Applications, vol. 171, ed. P. Wang, B. Goertzel, and S. Franklin, 2008.
* Steve Omohundro, “The Basic AI Drives,” in ''Artificial General Intelligence 2008: Proceedings of the First AGI Conference'', Frontiers in Artificial Intelligence and Applications, vol. 171, ed. P. Wang, B. Goertzel, and S. Franklin, 2008.
* Eliezer Yudkowsky's chapters in ''Global Catastrophic Risks''.
* E-CAP 2009 talks by Sotala and Shulman.
* Ben Goertzel and Cassio Pennachin, eds., ''Artificial General Intelligence'', Cognitive Technologies series.