'''AI takeoff''' refers to a point in the future when an [[Artificial General Intelligence]] becomes considerably more powerful, probably through [[Recursive self-improvement]]. This would lead to a large increase in intelligence, and would likely be accompanied by an increase in computing power and other resources. The speed at which an AGI's capabilities might expand is usually divided into “soft” and “hard” takeoff scenarios.
  
==Soft takeoff==

A '''soft takeoff''' refers to an AGI that would self-improve over a period of years or decades. This could happen either because the learning algorithms are too demanding for the available hardware, or because the AI relies on experiencing feedback from the real world that has to be played out in real time. Possible paths to a soft takeoff, by slowly building on human-level intelligence, include [[Whole brain emulation]], [[Biological Cognitive Enhancement]] and software-based strong AGI.<ref>http://www.aleph.se/andart/archives/2010/10/why_early_singularities_are_softer.html</ref> Because humanity would maintain some control over the AGI's ascent, it should be easier for a [[Friendly AI]] to emerge.
  
Vernor Vinge, Hans Moravec and Ray Kurzweil have all expressed the view that a soft takeoff would be preferable to a hard takeoff, as it would be both safer and easier to engineer.

==Hard takeoff==

A '''hard takeoff''' (or an AI going "'''FOOM'''"<ref>http://lesswrong.com/lw/63t/requirements_for_ai_to_go_foom/</ref>) refers to an AGI expanding in capability over a matter of minutes, days, or months. It is a fast, abrupt, local increase in capability. This scenario is widely considered much more precarious, as it involves an AGI rapidly ascending in power without human control. This may result in unexpected or undesired behavior (i.e. an [[Unfriendly AI]]). It is one of the main ideas supporting the [[Intelligence explosion]] hypothesis.
  
The feasibility of hard takeoff has been addressed by Hugo de Garis, [[Eliezer Yudkowsky]], [[Ben Goertzel]], [[Nick Bostrom]] and Michael Anissimov. It is widely agreed that a hard takeoff is something to be avoided due to the risks. Yudkowsky points out several factors that would make a hard takeoff more likely than a soft takeoff, such as the existence of large [[computing overhang|resource overhangs]] or the observation that small improvements in a mind's design can have a large impact on its general intelligence (e.g. the relatively small genetic difference between humans and chimps led to a huge increase in capability).<ref>http://lesswrong.com/lw/wf/hard_takeoff/</ref>

Both scenarios are limited by the power of the available hardware. Once the self-improving software has optimized its computing substrate, it could propose the design of better hardware (assuming it was not somehow capable of endlessly expanding its own processing power). If an algorithm is usefully scalable simply by adding more hardware, the speed at which the AGI's intelligence expands would depend on economic investment and the time it takes to attach the new hardware.

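One way to picture the difference between the two scenarios is with a toy growth model (an illustrative sketch, not a model taken from the sources cited above). Suppose the system's capability <math>I</math> grows at a rate that depends on its current capability:

:<math>\frac{dI}{dt} = k I^{\alpha}</math>

If <math>\alpha \le 1</math>, returns to self-improvement are constant or diminishing and <math>I(t)</math> grows at most exponentially, resembling a soft takeoff. If <math>\alpha > 1</math>, improvements compound and the solution <math>I(t) = \left(I_0^{1-\alpha} - k(\alpha - 1)t\right)^{1/(1-\alpha)}</math> diverges after a finite time, resembling a hard takeoff. Whether any such simple model, or which value of <math>\alpha</math>, describes a real self-improving AGI is exactly what is in dispute.
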
==Blog posts==
 
*[http://lesswrong.com/lw/wf/hard_takeoff/ Hard Takeoff] by Eliezer Yudkowsky
 
==External links==

*[http://www.kurzweilai.net/the-age-of-virtuous-machines The Age of Virtuous Machines] by J. Storrs Hall, President of The Foresight Institute

*[http://multiverseaccordingtoben.blogspot.co.uk/2011/01/hard-takeoff-hypothesis.html Hard Takeoff Hypothesis] by Ben Goertzel

*[http://www.acceleratingfuture.com/michael/blog/2011/05/hard-takeoff-sources/ Extensive archive of Hard takeoff Essays] from Accelerating Future

*[http://www-rohan.sdsu.edu/faculty/vinge/misc/ac2005/ Can we avoid a hard take off?] by Vernor Vinge

*[http://www.amazon.co.uk/Robot-Mere-Machine-Transcendent-Mind/dp/0195136306 Robot: Mere Machine to Transcendent Mind] by Hans Moravec

*[http://www.amazon.co.uk/The-Singularity-Near-Raymond-Kurzweil/dp/0715635611/ref=sr_1_1?s=books&ie=UTF8&qid=1339495098&sr=1-1 The Singularity is Near] by Ray Kurzweil
  
 
==See also==

*[[Seed AI]]
*[[Singularity]]
*[[Intelligence explosion]]
*[[Recursive self-improvement]]

==References==

<references />

[[Category:AI]]
[[Category:AI safety]]
