<blockquote>Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. (I. J. Good, 1965)</blockquote>

'''Intelligence explosion''' is the idea of a positive feedback loop in which an intelligence makes itself smarter, thereby getting better at making itself smarter still. A strong version of this idea suggests that once the positive feedback starts to play a role, it will lead to a dramatic leap in capability very quickly. Depending on the mechanism underlying the feedback loop, the transition may take years or hours; an intelligence explosion doesn't necessarily require recursively self-improving AI, since other possible routes include brain-computer interfaces or even genetic engineering. At some point technological progress drops into the characteristic timescale of transistors (or super-transistors) rather than human neurons, and the ascent rapidly surges upward, creating superintelligence (minds orders of magnitude more powerful than a human's) before it hits physical limits.
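
For a rough sense of the hardware timescale gap mentioned above (an order-of-magnitude illustration only, not a figure taken from the sources cited below): biological neurons fire at most a few hundred times per second, while transistors switch billions of times per second, so the serial speed ratio is roughly

:<math>\frac{\text{transistor switching rate}}{\text{neuron firing rate}} \approx \frac{10^{9}\,\text{Hz}}{10^{2}\,\text{Hz}} = 10^{7},</math>

enough, in principle, for a mind running at hardware speeds to fit a subjective year of serial thought (about <math>3 \times 10^{7}</math> seconds) into a few seconds of physical time.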
  
To pick just one possibility for illustrative purposes, an AI undergoing a [[hard takeoff]] might invent molecular nanotechnology, use the internet to gain physical manipulators, deploy the nanotech, use it to expand its computational capabilities, and reach [[singleton]] status within a matter of weeks. This sort of rapid recursive self-improvement was given the colloquial term "FOOM" in [[The Hanson-Yudkowsky AI-Foom Debate]].

In more detail, such a path might run as follows. First, the AI is smart enough to conclude that inventing molecular nanotechnology will be of greatest benefit to it. Its first act of recursive self-improvement is to gain access to other computers over the internet; this extra computational ability increases the depth and breadth of its search processes. It then uses its newly gained knowledge of materials physics, together with its distributed computing power, to design the first general assembler nanomachine, and uses manufacturing technology accessible over the internet to build and deploy the nanotech. It programs the nanotech to turn a large section of bedrock into a supercomputer; this is its second act of recursive self-improvement, only possible because of the first. It can then use this enormous computing power to consider hundreds of alternative decision algorithms, better computing structures, et cetera.
  
Recursive self-improvement would be a genuinely new phenomenon on Earth. Humans study, and human societies accumulate new technologies and ways of doing things, but we don't directly redesign our brains. A cleanly-designed AI could redesign itself, and reap the benefits of recursive self-improvement.
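
The contrast between ordinary accumulation of improvements and a self-reinforcing loop can be made concrete with a small toy model (a Python sketch for illustration only; the capability score, improvement rate, and step count are arbitrary made-up quantities, not taken from any of the works cited below). If each round of improvement adds a fixed amount, capability grows linearly; if each round's gain is instead proportional to the current level, growth compounds and soon dwarfs the linear case.

<pre>
# Toy model contrasting fixed-rate improvement with recursive self-improvement,
# where each gain makes the next gain larger. "level" is an abstract capability
# score in arbitrary units.

def improve(initial=1.0, rate=0.1, steps=100, recursive=True):
    """Return the capability trajectory over `steps` rounds of improvement.

    recursive=False: every round adds the same fixed increment (rate * initial),
                     like steady outside progress the system cannot speed up.
    recursive=True:  every round's gain is proportional to the current level,
                     i.e. a smarter system is better at making itself smarter.
    """
    level = initial
    trajectory = [level]
    for _ in range(steps):
        gain = rate * (level if recursive else initial)
        level += gain
        trajectory.append(level)
    return trajectory

steady = improve(recursive=False)   # linear growth: 1.0, 1.1, 1.2, ...
feedback = improve(recursive=True)  # compounding growth: 1.0, 1.1, 1.21, ...

print("after 100 rounds, no feedback:  ", round(steady[-1], 1))    # 11.0
print("after 100 rounds, with feedback:", round(feedback[-1], 1))  # 13780.6
</pre>

The point of the sketch is only the shape of the curves: with the feedback turned on, most of the growth happens in the last few rounds, which is the intuition behind a sudden leap in capability once self-improvement begins to compound.
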
Philosopher David Chalmers published a significant analysis of the Singularity, focusing on intelligence explosions, in the ''Journal of Consciousness Studies''. His analysis of how an intelligence explosion could occur defends its likelihood; he also discusses the nature of general intelligence and possible obstacles to a singularity. A good deal of the paper is devoted to the dangers of an intelligence explosion, and Chalmers concludes that we must negotiate it very carefully by building the correct values into the initial AIs.

Luke Muehlhauser ([http://lesswrong.com/user/lukeprog lukeprog]) and Anna Salamon ([http://lesswrong.com/user/AnnaSalamon AnnaSalamon]) argue in detail in ''Intelligence Explosion: Evidence and Import'' that an intelligence explosion is very likely within 100 years and extremely critical in determining the future. They trace the implications of many kinds of upcoming technology and point out the feedback loops present in them. This leads them to deduce that above-human-level AI will almost certainly lead to an intelligence explosion. They conclude with recommendations for bringing about a safe intelligence explosion.
  
 
==Blog posts==

*[http://lesswrong.com/lw/w5/cascades_cycles_insight/ Cascades, Cycles, Insight...], [http://lesswrong.com/lw/w6/recursion_magic/ ...Recursion, Magic]
*[http://lesswrong.com/lw/we/recursive_selfimprovement/ Recursive Self-Improvement], [http://lesswrong.com/lw/wf/hard_takeoff/ Hard Takeoff], [http://lesswrong.com/lw/wg/permitted_possibilities_locality/ Permitted Possibilities, & Locality]
 
==See also==

*[[Technological singularity]], [[Hard takeoff]]
*[[Existential risk]]
*[[Artificial General Intelligence]]
*[[Lawful intelligence]]
*[[The Hanson-Yudkowsky AI-Foom Debate]]

==External links==

*[http://intelligenceexplosion.com/ Intelligence Explosion website], a landing page for introducing the concept
*[http://yudkowsky.net/singularity/schools Three Major Singularity Schools]

==References==

*{{cite journal
| author = Good, Irving John
| editor = Franz L. Alt and Morris Rubinoff
| year = 1965
| title = Speculations concerning the first ultraintelligent machine
| journal = Advances in Computers
| volume = 6
| pages = 31-88
| location = New York
| publisher = Academic Press
| doi = 10.1016/S0065-2458(08)60418-0
| url = http://commonsenseatheism.com/wp-content/uploads/2011/02/Good-Speculations-Concerning-the-First-Ultraintelligent-Machine.pdf
}}
*{{cite journal
| author = Chalmers, David
| year = 2010
| title = The Singularity: A Philosophical Analysis
| journal = Journal of Consciousness Studies
| volume = 17
| pages = 7-65
| url = http://consc.net/papers/singularity.pdf
}}
*{{cite book
| last1 = Muehlhauser
| first1 = Luke
| last2 = Salamon
| first2 = Anna
| contribution = Intelligence Explosion: Evidence and Import
| year = 2012
| title = The Singularity Hypothesis: A Scientific and Philosophical Assessment
| editor1-last = Eden
| editor1-first = Amnon
| editor2-last = Søraker
| editor2-first = Johnny
| editor3-last = Moor
| editor3-first = James H.
| editor4-last = Steinhart
| editor4-first = Eric
| place = Berlin
| publisher = Springer
| contribution-url = http://commonsenseatheism.com/wp-content/uploads/2012/02/Muehlhauser-Salamon-Intelligence-Explosion-Evidence-and-Import.pdf
}}
 
[[Category:Concepts]]
[[Category:Future]]
[[Category:Jargon]]
