<blockquote>Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind.</blockquote>
:I. J. Good, "Speculations Concerning the First Ultraintelligent Machine" (1965)

'''Intelligence explosion''' is the idea of a positive feedback loop in which an intelligence makes itself smarter, and thereby gets better at making itself smarter still. A strong version of this idea suggests that once the positive feedback starts to play a role, it will quickly lead to a dramatic leap in capability. Depending on the mechanism underlying the feedback loop, the transition may take years or hours; an intelligence explosion doesn't necessarily require recursively self-improving AIs, since other possible mechanisms include brain-computer interfaces or even genetic engineering. At some point technological progress drops into the characteristic timescale of transistors (or super-transistors) rather than human neurons, and the ascent rapidly surges upward, creating superintelligence (minds orders of magnitude more powerful than a human's) before it hits physical limits.
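The feedback loop can be made concrete with a toy growth model. The sketch below is purely illustrative and not part of the original article: it assumes capability grows as dI/dt = k·I^c, where the rate constant k, the returns exponent c, and the function <code>simulate</code> are hypothetical names chosen for this example. With diminishing returns (c < 1) capability climbs gradually, with c = 1 it grows exponentially, and with compounding returns (c > 1) the model blows up in finite time, which is one way of formalizing the "dramatic leap" of the strong version.

<syntaxhighlight lang="python">
# Toy model of the feedback loop described above: capability I grows at a
# rate that depends on current capability, dI/dt = k * I**c. This is an
# illustrative sketch, not a model from the article; k, c, i0, and the
# blow-up cap are hypothetical parameters chosen for demonstration.

def simulate(c, k=0.1, i0=1.0, dt=0.01, t_max=100.0, cap=1e12):
    """Euler-integrate dI/dt = k * I**c and return (time, capability) samples."""
    t, i = 0.0, i0
    samples = []
    while t < t_max and i < cap:
        samples.append((t, i))
        i += k * i**c * dt   # the smarter the system, the faster it improves
        t += dt
    return samples

# c < 1: diminishing returns; c == 1: exponential growth;
# c > 1: the feedback compounds and the model blows up in finite time.
for c in (0.5, 1.0, 1.5):
    t_end, i_end = simulate(c)[-1]
    print(f"c = {c}: capability {i_end:.3g} at t = {t_end:.1f}")
</syntaxhighlight>

Whether real cognitive reinvestment behaves more like c < 1 or c > 1 is exactly what the hard-takeoff question turns on; the toy model only shows how sensitive the outcome is to that exponent.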

To pick just one possibility for illustrative purposes, an AI undergoing a [[hard takeoff]] might invent molecular nanotechnology, use the internet to gain physical manipulators, deploy the nanotech, use it to expand its computational capabilities, and reach [[singleton]] status within a matter of weeks. This sort of rapid recursive self-improvement was given the colloquial term "FOOM" in [[The Hanson-Yudkowsky AI-Foom Debate]]. Recursive self-improvement would be a genuinely new phenomenon on Earth: humans study, and human societies accumulate new technologies and ways of doing things, but we don't directly redesign our brains. A cleanly designed AI could redesign itself and reap the benefits of recursive self-improvement.

==Blog posts==

==See also==

==Sequences==