A draft page for the upcoming Less Wrong paper on the Singularity. See http://lesswrong.com/lw/1fz/a_less_wrong_singularity_article/ .
EY's arguments from Overcoming Bias are linked in Intelligence explosion.
This is worth including:
- The question of FOOM
  - improvement up to human-comparable programming ability
  - improvement at human-comparable programming ability and beyond
  - effects of hardware scale and of minds that can be copied and run quickly, vs. qualitative improvements
This will be especially helpful:
This may be worth including:
- The question of FAI
These are important:
Posts to be covered
List here particular posts of note whose content should be included.
- Magical categories
References
List here references to peer-reviewed work that can be used for building the argument.
Any relevant source deserves a citation; the published items are:
- Nick Bostrom, “Ethical Issues in Advanced Artificial Intelligence,” Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Vol. 2, ed. I. Smit et al., Int. Institute of Advanced Studies in Systems Research and Cybernetics, 2003.
- Ben Goertzel, “Thoughts on AI Morality,” Dynamical Psychology, 2002.
- Ben Goertzel, “The All-Seeing (A)I,” Dynamical Psychology, 2004.
- Ben Goertzel, “Encouraging a Positive Transcension,” Dynamical Psychology, 2004.
- Stephan Vladimir Bugaj and Ben Goertzel, “Five Ethical Imperatives and their Implications for Human-AGI Interaction.”
- J. Storrs Hall, “Engineering Utopia,” Artificial General Intelligence 2008: Proceedings of the First AGI Conference, Volume 171, Frontiers in Artificial Intelligence and Applications, ed. P. Wang, B. Goertzel and S. Franklin, 2008.
- Steve Omohundro, “The Basic AI Drives,” Artificial General Intelligence 2008: Proceedings of the First AGI Conference, Volume 171, Frontiers in Artificial Intelligence and Applications, ed. P. Wang, B. Goertzel and S. Franklin, 2008.
- EY's chapters in Global Catastrophic Risks, ed. N. Bostrom and M. Ćirković, Oxford University Press, 2008.