Singularity paper

A draft page for the upcoming Less Wrong paper on the Singularity. See http://lesswrong.com/lw/1fz/a_less_wrong_singularity_article/ .
 
Also see: [http://www.imminst.org/forum/index.php?showtopic=35318&hl= Intelligence Explosion]
  
 
== Basic structure ==
 
===Introduction===
 
  
 
=====Abstract=====
 
The creation of an artificial general intelligence (AGI) is plausible. Humans are the first and only example we have of a general intelligence, but we have strong empirical and theoretical reasons for believing that we're nowhere near maximally intelligent. A sufficient understanding of ''intelligence'', or of whatever similar concept can help us understand and account for human technological prowess, may allow researchers to duplicate the process in a computer. Computer-based AIs could potentially think much faster than humans, expand onto new hardware, and undergo recursive self-improvement. These possibilities suggest that such an AGI, once created, is likely to undergo an ''intelligence explosion'', vastly increasing its capabilities on a very fast timescale, to far beyond the human level.
 
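One way the paper might make this feedback argument precise (a toy formalization we could adapt here, not a result taken from any of the sources below) is to let capability <math>I</math> grow at a rate that depends on the current level of capability:

<math>\frac{dI}{dt} = c\,I^{\alpha}, \qquad c > 0</math>

For <math>\alpha < 1</math> the solution grows only polynomially, for <math>\alpha = 1</math> exponentially, and for <math>\alpha > 1</math> it diverges in finite time, which is the mathematical caricature of an intelligence explosion. In these terms, the question of FOOM below is whether recursive self-improvement pushes the effective <math>\alpha</math> above 1.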
===The Power of Intelligence===
Human civilization has had a massive impact on this planet. Humans have developed language, sophisticated tools, and advanced technology by virtue of our general intelligence: human individuals can learn and invent new skills and ''teach'' them to other humans, rather than waiting for new adaptations to develop gradually by natural selection. We are cultural animals, and over time, especially since the scientific and industrial revolutions, our culture has accumulated ever more complex knowledge and an ever deeper division of labor. Once natural selection produced humans with enough general intelligence to sustain culture, culture took over, and it operates on a far faster timescale than evolution.
  
===Limitations of human intelligence, and ways in which AI could be more powerful and faster===
As great an impact as human intelligence has had, there are specific reasons to think it would be greatly surpassed by AI. Almost all of our advances over the past several thousand years have been cultural; we haven't had time to evolve new brain architecture (cite evopsych paper? ''The Adapted Mind''?). Evolution didn't design our brains to build civilizations; that is simply something that happened. We can also point to specific ways in which the human brain is lacking, where our intuitive judgement goes horribly wrong (cite heuristics and biases).
  
 
* '''The question of FOOM'''

-improvement up to human-comparable programming ability

-improvement at human-comparable programming ability and beyond

-effects of hardware scale, minds that can be copied and run quickly, vs qualitative improvements (see the sketch below)
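A minimal numerical sketch of the contrast between these modes (illustrative only: the functions, constants, and parameter values below are arbitrary assumptions, not figures from the draft or its sources):

<pre>
# Toy contrast between hardware scaling and recursive self-improvement.
# All constants here are arbitrary illustrative assumptions.

def hardware_scaling(steps, copies=100, speed=10, gain=1e-3):
    """More copies running faster do proportionally more work, but each
    unit of work adds a fixed capability increment: growth stays linear."""
    capability = 1.0
    for _ in range(steps):
        capability += copies * speed * gain
    return capability

def recursive_self_improvement(steps, c=1e-3, alpha=1.2):
    """Each gain makes the system better at improving itself: a discrete
    analogue of dI/dt = c * I**alpha, which for alpha > 1 diverges in
    finite time rather than merely growing exponentially."""
    capability = 1.0
    for _ in range(steps):
        capability += c * capability ** alpha
    return capability

for steps in (1000, 3000, 4900):
    print(steps,
          "hardware: %.3g" % hardware_scaling(steps),
          "recursive: %.3g" % recursive_self_improvement(steps))
</pre>

With these (arbitrary) parameters the recursive curve lags the hardware-scaling curve for thousands of steps and then blows past it; that qualitative shape, slow at first and then explosive, is what the hard-takeoff question turns on.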
 
* '''Conclusions'''
* '''References'''
 
== References ==
 
 
List here references to work that can be used for building the argument.
 
[http://www.springer.com/computer/artificial/journal/11023?detailsPage=contentItemPage&CIPageCounter=142553 ''Minds and Machines'' submission guidelines]
  
 
===Works by Yudkowsky===
 
====Less Wrong posts====

*[http://lesswrong.com/lw/rk/optimization_and_the_singularity/ Optimization and the Singularity]
*[http://lesswrong.com/lw/w5/cascades_cycles_insight/ Cascades, Cycles, Insight...]
*[http://lesswrong.com/lw/w6/recursion_magic/ ...Recursion, Magic]
*[http://lesswrong.com/lw/w8/engelbart_insufficiently_recursive/ Engelbart: Insufficiently Recursive]
*[http://lesswrong.com/lw/w9/total_nano_domination/ Total Nano Domination]
*[http://lesswrong.com/lw/wc/singletons_rule_ok/ Singletons Rule OK]
*'''''[http://lesswrong.com/lw/we/recursive_selfimprovement/ Recursive Self-Improvement]'''''
*'''''[http://lesswrong.com/lw/wf/hard_takeoff/ Hard Takeoff]'''''
*'''''[http://lesswrong.com/lw/wg/permitted_possibilities_locality/ Permitted Possibilities, & Locality]'''''
*'''''[http://lesswrong.com/lw/wi/sustained_strong_recursion/ Sustained Strong Recursion]'''''
*'''''[http://lesswrong.com/lw/wm/disjunctions_antipredictions_etc/ Disjunctions, Antipredictions, Etc.]'''''
*'''''[http://lesswrong.com/lw/wp/what_i_think_if_not_why/ What I Think, If Not Why]'''''

====Other Yudkowsky material====
 
*[http://intelligence.org/upload/LOGI/seedAI.html LOGI part 3, Seed AI]
*[http://yudkowsky.net/singularity/power The Power of Intelligence]
*[http://intelligence.org/upload/cognitive-biases.pdf Cognitive Biases Potentially Affecting Judgement of Global Risks] (PDF), in ''Global Catastrophic Risks''
*[http://intelligence.org/upload/artificial-intelligence-risk.pdf Artificial Intelligence as a Positive and Negative Factor in Global Risk] (PDF), in ''Global Catastrophic Risks''
*[http://www.preventingskynet.com/why-we-need-friendly-ai/ Why We Need Friendly AI]
  
===Other sources and references===

See list here and also this

* Nick Bostrom, “Ethical Issues in Advanced Artificial Intelligence,” Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Vol. 2, ed. I. Smit et al., Int. Institute of Advanced Studies in Systems Research and Cybernetics, 2003.
* Ben Goertzel, “Thoughts on AI Morality,” Dynamical Psychology, 2002.
* Ben Goertzel, “The All-Seeing (A)I,” Dynamical Psychology, 2004.
* Ben Goertzel, “Encouraging a Positive Transcension,” Dynamical Psychology, 2004.
* I. J. Good, "[http://www.acceleratingfuture.com/ultraintelligentmachine.html Speculations Concerning the First Ultraintelligent Machine]"
* Stephan Vladimir Bugaj and Ben Goertzel, “Five Ethical Imperatives and their Implications for Human-AGI Interaction.”
* J. Storrs Hall, “Engineering Utopia,” Artificial General Intelligence 2008: Proceedings of the First AGI Conference, Volume 171, Frontiers in Artificial Intelligence and Applications, ed. P. Wang, B. Goertzel and S. Franklin, 2008.
* Steve Omohundro, “The Basic AI Drives,” Proceedings of the First AGI Conference, Volume 171, Frontiers in Artificial Intelligence and Applications, ed. P. Wang, B. Goertzel and S. Franklin, 2008.
* Ben Goertzel, Cassio Pennachin, et al., ''Artificial General Intelligence'' (Cognitive Technologies)
* Carl Shulman, Henrik Jonsson, and Nick Tarleton, "[http://bentham.k2.t.u-tokyo.ac.jp/ap-cap09/openconf/data/papers/28-2.pdf Machine Ethics and Superintelligence]" (PDF), APCAP09
* Kaj Sotala, "[http://www.xuenay.net/ECAP2009.pdf Evolved altruism, ethical complexity, anthropomorphic trust: three factors misleading estimates of the safety of artificial general intelligence]" (PDF), ECAP09
* Carl Shulman, Henrik Jonsson, and Nick Tarleton, "[http://bentham.k2.t.u-tokyo.ac.jp/ap-cap09/openconf/data/papers/33.pdf Which Consequentialism? Machine Ethics and Moral Divergence]" (PDF), APCAP09
* Carl Shulman, "Arms Control and Intelligence Explosions," ECAP09
* Vernor Vinge, "[http://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html The Coming Technological Singularity]"
* Joel Veness ''et al.'', "[http://arxiv.org/abs/0909.0801 A Monte Carlo AIXI Approximation]" ([http://www.vetta.org/2009/09/monte-carlo-aixi/ Hat tip Shane Legg])
