Intelligence explosion


<blockquote>Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind.</blockquote>

:I. J. Good, "Speculations Concerning the First Ultraintelligent Machine", 1965

The intelligence explosion is one of what [[Eliezer Yudkowsky]] calls the three major singularity schools: the idea that minds creating smarter minds could enter a positive feedback loop strong enough to go "foom" in very little time.

==See also==

*[[Technological singularity]]
*[[Artificial general intelligence]]
*[[Lawful intelligence]]

==Blog posts==

=====Posts by [[Eliezer Yudkowsky]]:=====

*[http://lesswrong.com/lw/w5/cascades_cycles_insight/ Cascades, Cycles, Insight...], [http://lesswrong.com/lw/w6/recursion_magic/ ...Recursion, Magic]
*[http://lesswrong.com/lw/we/recursive_selfimprovement/ Recursive Self-Improvement], [http://lesswrong.com/lw/wf/hard_takeoff/ Hard Takeoff]

{{stub}}

[[Category:Concepts]]