AI takeoff refers to a point in the future when an Artificial General Intelligence becomes considerably more powerful, probably through Recursive self-improvement. This process would produce a rapid increase in intelligence and would likely also bring an increase in computing power and other resources. The speed at which an AGI's power may expand is usually split into “soft” and “hard” takeoff scenarios.
A soft takeoff refers to an AGI that self-improves over a period of years or decades. This could happen either because the learning algorithm is too computationally demanding for the available hardware, or because the AI relies on feedback from the real world that can only be played out in real time. Possible routes to a soft takeoff, by slowly building on human-level intelligence, are Whole brain emulation, Biological Cognitive Enhancement, and software-based strong AGI. By maintaining control over the AGI's ascent, it should be easier for a Friendly AI to emerge.
Vernor Vinge, Hans Moravec and Ray Kurzweil have all expressed the view that a soft takeoff is preferable to a hard takeoff, as it would be both safer and easier to engineer.
A hard takeoff (or an AI going "FOOM") refers to an AGI expanding its capabilities in a matter of minutes, days, or months. It is a fast, abrupt, local increase in capability. This scenario is widely considered much more precarious, as it involves an AGI rapidly ascending in power without human control. It may result in unexpected or undesired behavior (i.e. an Unfriendly AI). A hard takeoff is one of the main ideas supporting the Intelligence explosion hypothesis.
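The difference between the two regimes can be illustrated with a toy growth model (a minimal sketch with arbitrary, made-up parameters; it is not a model drawn from the takeoff literature): when each round of self-improvement yields diminishing returns, capability climbs gradually over many rounds, but when returns compound, capability explodes once a threshold is crossed.

```python
# Toy model of recursive self-improvement (illustrative only; the growth
# functions and parameters are arbitrary assumptions, not predictions).

def simulate(gain, steps=30, start=1.0):
    """gain(capability) -> increment produced by one round of self-improvement."""
    capability = start
    history = [capability]
    for _ in range(steps):
        capability += gain(capability)
        history.append(capability)
    return history

# "Soft" regime: diminishing returns -- each round of self-improvement
# buys proportionally less, e.g. because hardware or real-world feedback
# is the bottleneck.
soft = simulate(lambda c: 0.1 * c ** 0.5)

# "Hard" regime: compounding returns -- a more capable system is more
# than proportionally better at improving itself.
hard = simulate(lambda c: 0.1 * c ** 1.5)

print(f"soft takeoff after 30 rounds: {soft[-1]:.3g}")   # grows to roughly ~6x
print(f"hard takeoff after 30 rounds: {hard[-1]:.3g}")   # grows by many orders of magnitude
```

Under these assumptions both curves look similar for the first rounds; the hard-takeoff curve only diverges explosively later, which is part of why the two scenarios can be difficult to distinguish in advance.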
The feasibility of a hard takeoff has been addressed by Hugo de Garis, Eliezer Yudkowsky, Ben Goertzel, Nick Bostrom and Michael Anissimov. It is widely agreed that a hard takeoff is something to be avoided due to the risks it poses. Yudkowsky points out several factors that would make a hard takeoff more likely than a soft takeoff, such as the existence of large resource overhangs, or the observation that small improvements can have a large impact on a mind's general intelligence (e.g. the small genetic difference between humans and chimpanzees corresponds to a huge difference in capability).
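One way to see why a large resource overhang matters is a back-of-the-envelope sketch (the figures below are entirely hypothetical placeholders, not published estimates): if far more computing power already exists than is needed to run the first human-level AGI, that AGI could immediately be copied or sped up many times over, producing a discontinuous jump in capability rather than a gradual climb.

```python
# Hypothetical illustration of a hardware overhang (all figures are
# made-up placeholders chosen only to show the shape of the argument).

first_agi_flops = 1e16   # assumed compute needed to run one human-level AGI in real time
available_flops = 1e21   # assumed compute already deployed and reachable

copies = available_flops / first_agi_flops
print(f"Idle hardware could host roughly {copies:,.0f} copies of the first AGI at once.")
```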
== References ==
- Hard Takeoff by Eliezer Yudkowsky
- The Age of Virtuous Machines by J. Storrs Hall, President of the Foresight Institute
- Hard Takeoff Hypothesis by Ben Goertzel
- Extensive archive of hard takeoff essays from Accelerating Future
- Can we avoid a hard takeoff? by Vernor Vinge
- Robot: Mere Machine to Transcendent Mind by Hans Moravec
- The Singularity is Near by Ray Kurzweil