A draft page for the upcoming Less Wrong paper on the Singularity. See http://lesswrong.com/lw/1fz/a_less_wrong_singularity_article/ .
The creation of an artificial general intelligence (AGI) is plausible. Humans are the first and only example we have of a general intelligence, but we have strong empirical and theoretical reasons to believe that we are nowhere near maximally intelligent. A sufficient understanding of intelligence, or of whatever related concept accounts for human technological prowess, may allow researchers to duplicate the process in a computer. Computer-based AIs could potentially think much faster than humans, expand onto new hardware, and undergo recursive self-improvement. These possibilities suggest that such an AGI, once created, is likely to undergo an intelligence explosion, rapidly increasing its capabilities far beyond the human level.
- The question of FOOM
- Improvement up to human-comparable programming ability
- Improvement at human-comparable programming ability and beyond
- Effects of hardware scale and of minds that can be copied and run quickly, vs. qualitative improvements
List here references to work that can be used for building the argument.
Works by Yudkowsky
Less Wrong posts
- Optimization and the Singularity
- Cascades, Cycles, Insight ...
- ... Recursion, Magic
- Engelbart: Insufficiently Recursive
- Total Nano Domination
- Singletons Rule OK
- Recursive Self-improvement
- Hard Takeoff
- Permitted Possibilities, & Locality
- Sustained Strong Recursion
- Disjunctions, Antipredictions, &c.
- What I Think, If Not Why
Other Yudkowsky material
- What Is the Singularity?
- Why Work Toward the Singularity?
- Three Singularity Schools
- LOGI part 3, Seed AI
- The Power of Intelligence
- Cognitive Biases Potentially Affecting Judgment of Global Risks, in Global Catastrophic Risks
- Artificial Intelligence as a Positive and Negative Factor in Global Risk, in Global Catastrophic Risks
- Why We Need Friendly AI
Other sources and references
- Nick Bostrom, “Ethical Issues in Advanced Artificial Intelligence,” Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Vol. 2, ed. I. Smit et al., Int. Institute of Advanced Studies in Systems Research and Cybernetics, 2003.
- Ben Goertzel, “Thoughts on AI Morality,” Dynamical Psychology, 2002.
- Ben Goertzel, “The All-Seeing (A)I,” Dynamical Psychology, 2004.
- Ben Goertzel, “Encouraging a Positive Transcension,” Dynamical Psychology, 2004.
- I. J. Good, “Speculations Concerning the First Ultraintelligent Machine,” Advances in Computers, vol. 6, 1965.
- Stephan Vladimir Bugaj and Ben Goertzel, “Five Ethical Imperatives and their Implications for Human-AGI Interaction.”
- J. Storrs Hall, “Engineering Utopia”, Artificial General Intelligence 2008: Proceedings of the First AGI Conference, Volume 171, Frontiers in Artificial Intelligence and Applications, ed. P. Wang, B. Goertzel and S. Franklin, 2008.
- Steve Omohundro, “The Basic AI Drives”, Proceedings of the First AGI Conference, Volume 171, Frontiers in Artificial Intelligence and Applications, ed. P. Wang, B. Goertzel and S. Franklin, 2008.
- Ben Goertzel and Cassio Pennachin, eds., Artificial General Intelligence, Cognitive Technologies series, Springer, 2007.
- Carl Shulman, Henrik Jonsson, and Nick Tarleton, “Machine Ethics and Superintelligence,” AP-CAP 2009.
- Carl Shulman, Henrik Jonsson, and Nick Tarleton, “Which Consequentialism? Machine Ethics and Moral Divergence,” AP-CAP 2009.
- Vernor Vinge, “The Coming Technological Singularity,” 1993.