'''AI takeoff''' refers to a hypothetical point in the future when an [[Artificial General Intelligence]] becomes able to self-improve without human assistance. Opinion is divided on how rapidly the intelligence of such an AGI would expand; the possibilities are usually split into “soft” and “hard” takeoff scenarios.
 
 
A “[[soft takeoff]]” assumes that the system would require months, years, or decades of self-improvement before it developed into a full AGI. This could be because the algorithm is too demanding for existing hardware, or because the AI’s learning algorithm is designed to receive feedback from the real world in real time.
 
 
 
A “[[hard takeoff]]” would involve the emergence of an AGI in a matter of minutes, hours, or days. This scenario is widely considered much more precarious because of the possibility of the AGI behaving in unexpected ways (i.e. an [[Unfriendly AI]]); unlike a “[[soft takeoff]]”, there may be no opportunity to intervene before something goes very wrong.
 
 
 
Both scenarios are limited by the power of the available hardware. Once an algorithm has fully optimized its computing substrate, it could propose the design of better hardware (assuming it was not somehow capable of endlessly expanding its own processing power). If an algorithm scales usefully simply by adding more hardware, the speed at which the AGI’s intelligence expanded would depend on economic investment and the time it would take to attach the new hardware.
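
The contrast between the two regimes can be made concrete with a toy growth model. The sketch below is purely illustrative: the function names, growth rates, and time steps are assumptions chosen only to contrast roughly linear, investment-limited growth with compounding self-improvement; it is not a model drawn from any of the sources listed below.

<syntaxhighlight lang="python">
# Toy comparison of the two takeoff regimes described above.
# Growth rates and time steps are illustrative assumptions only.

def soft_takeoff(capability: float, years: int, hardware_gain: float = 0.5) -> list:
    """Capability grows only as fast as new hardware can be bought and attached."""
    history = [capability]
    for _ in range(years):
        capability += hardware_gain      # roughly linear, investment-limited growth
        history.append(capability)
    return history

def hard_takeoff(capability: float, rounds: int, improvement: float = 0.5) -> list:
    """Each round of self-improvement compounds on the previous one."""
    history = [capability]
    for _ in range(rounds):
        capability *= 1 + improvement    # multiplicative, potentially runaway growth
        history.append(capability)
    return history

print(soft_takeoff(1.0, 10))   # 1.0, 1.5, 2.0, ... 6.0 over ten years
print(hard_takeoff(1.0, 10))   # 1.0, 1.5, 2.25, ... ~57.7 over ten rounds
</syntaxhighlight>

The particular numbers are unknowable in advance; the point is only that a multiplicative feedback loop crosses any fixed capability threshold far sooner than an additive, investment-limited one.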
 
 
 
==Blog Posts==
 
 
 
*[http://lesswrong.com/lw/wf/hard_takeoff/ Hard Takeoff] by Eliezer Yudkowsky
*[http://www.kurzweilai.net/the-age-of-virtuous-machines The Age of Virtuous Machines] by J. Storrs Hall, President of the Foresight Institute
*[http://multiverseaccordingtoben.blogspot.co.uk/2011/01/hard-takeoff-hypothesis.html The Hard Takeoff Hypothesis] by Ben Goertzel
 
 
 
==External Links==
 
 
 
*[http://www.acceleratingfuture.com/michael/blog/2011/05/hard-takeoff-sources/ Extensive archive of hard takeoff essays] from Accelerating Future
*[http://www-rohan.sdsu.edu/faculty/vinge/misc/ac2005/ Can we avoid a hard takeoff?] by Vernor Vinge
 
 
 
==See Also==
 
 
 
*[[Hard takeoff]]
*[[Soft takeoff]]
 
