'''AI takeoff''' refers to a point in the future where an [[Artificial General Intelligence]] recursively self-improves. This will lead to an increase in intelligence, and will likely also bring an increase in computing power and other resources. The speed at which an AGI may expand is usually split into “soft” and “hard” takeoff scenarios.
  
==Soft takeoff==
  
A '''soft takeoff''' refers to an AGI that self-improves into a superintelligence over a period of months, years or decades. This could be because the learning algorithm is too demanding for the available hardware, or because the AI relies on feedback from the real world that has to be played out in real time. Because humans can maintain some control over the AGI’s ascent, it should be easier for a [[Friendly AI]] to emerge.
  
Vernor Vinge, Hans Moravec and Ray Kurzweil have all expressed the view that a soft takeoff would be preferable to a hard takeoff, as it would be both safer and easier to engineer.
==Hard takeoff==
A '''hard takeoff''' refers to an AGI expanding into a superintelligence in a matter of minutes, hours or days. This scenario is widely considered much more precarious, as it involves an AGI rapidly ascending in power without human control. It may result in unexpected or undesired behavior (i.e. an [[Unfriendly AI]]). A hard takeoff can be defined either as a system attaining vastly greater intelligence or as one acquiring extensive computing resources (e.g. control of the Internet).
 
The feasibility of a hard takeoff has been addressed by Hugo de Garis, [[Eliezer Yudkowsky]], [[Ben Goertzel]], [[Nick Bostrom]] and Michael Anissimov. However, it is widely agreed that a hard takeoff is something to be avoided due to the associated risks.
 
Although several science fiction authors have speculated that a hard takeoff may happen by accident—for example, “the Internet waking up”—this scenario is largely dismissed by computer scientists, as intelligence is considered to be a hard problem.
 
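The gap between the two timescales above can be illustrated with a toy growth model. This sketch is not taken from any of the sources below; every constant and the growth law itself are arbitrary assumptions, chosen only to show how diminishing, constant, or compounding returns from self-improvement turn the same recursive process into a slow climb or a sudden runaway.

<pre>
# Toy illustration only: intelligence I grows by repeated self-improvement,
# approximated as dI/dt = k * I**alpha.  With alpha < 1 returns diminish
# (soft-takeoff-like timescales); with alpha > 1 returns compound and the
# growth runs away in finite time (hard-takeoff-like).  All constants are
# arbitrary assumptions, not predictions.

def years_to_reach(target, alpha, k=0.05, i0=1.0, dt=0.01, max_years=1000.0):
    """Euler-integrate dI/dt = k * I**alpha from I(0) = i0 and return the
    simulated years needed to reach `target`, or None if max_years pass first."""
    intelligence, t = i0, 0.0
    while intelligence < target:
        if t >= max_years:
            return None
        intelligence += k * intelligence ** alpha * dt
        t += dt
    return t

for alpha, label in [(0.5, "diminishing returns"),
                     (1.0, "constant (exponential) returns"),
                     (1.5, "compounding returns")]:
    t = years_to_reach(target=1e6, alpha=alpha)
    result = "never (within 1,000 simulated years)" if t is None else "~%.0f simulated years" % t
    print("alpha = %.1f (%s): %s" % (alpha, label, result))
</pre>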
==Blog posts==
  
 
*[http://lesswrong.com/lw/wf/hard_takeoff/ Hard Takeoff] by Eliezer Yudkowsky

==External links==
  
*[http://www.kurzweilai.net/the-age-of-virtuous-machines The Age of Virtuous Machines] by J. Storrs Hall, President of the Foresight Institute
*[http://multiverseaccordingtoben.blogspot.co.uk/2011/01/hard-takeoff-hypothesis.html The Hard Takeoff Hypothesis] by Ben Goertzel
*[http://www.acceleratingfuture.com/michael/blog/2011/05/hard-takeoff-sources/ Extensive archive of Hard takeoff Essays] from Accelerating Future
*[http://www-rohan.sdsu.edu/faculty/vinge/misc/ac2005/ Can we avoid a hard take off?] by Vernor Vinge
*[http://www.amazon.co.uk/Robot-Mere-Machine-Transcendent-Mind/dp/0195136306 Robot: Mere Machine to Transcendent Mind] by Hans Moravec
*[http://www.amazon.co.uk/The-Singularity-Near-Raymond-Kurzweil/dp/0715635611/ref=sr_1_1?s=books&ie=UTF8&qid=1339495098&sr=1-1 The Singularity is Near] by Ray Kurzweil

==See Also==

*[[Seed AI]]
*[[Singularity]]
*[[Intelligence explosion]]
*[[Recursive self-improvement]]
