AI takeoff refers to a hypothetical point in the future at which an Artificial General Intelligence (AGI) becomes able to self-improve without human assistance. Opinion is divided on how rapidly the intelligence of an AGI would expand; scenarios are commonly split into “soft” and “hard” takeoffs.
A “soft takeoff” assumes that the system would require months, years, or decades to improve itself into a full AGI. This could be because the algorithm is too demanding for the available hardware, or because the AI’s learning algorithm is designed to receive feedback from the real world in real time.
A “hard takeoff” would involve the creation of an AGI in a matter of minutes, hours, or days. This scenario is widely considered much more precarious because of the possibility of the AGI behaving in unexpected ways (i.e., Unfriendly AI): unlike in a “soft takeoff,” there may be no opportunity to intervene before something goes very wrong.
Both scenarios are limited by the power of the hardware. Once an algorithm had optimized its computing substrate, it could propose the design of better hardware (assuming it was not somehow capable of endlessly expanding its own processing power). If an algorithm scales usefully simply by adding more hardware, the speed at which the AGI’s intelligence expanded would depend on economic investment and the time needed to attach the new hardware.
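The qualitative difference between the two scenarios can be sketched as a toy growth model (illustrative only; the function names, rates, and units are assumptions for the sketch, not from the literature). A hardware-limited system gains a roughly fixed increment of capability per improvement cycle, while a recursively self-improving system gains in proportion to its current capability:

```python
# Toy model of takeoff dynamics (illustrative assumptions, not a real forecast).

def soft_takeoff(steps, gain_per_step=1.0, start=1.0):
    """Capability bounded by fixed hardware: each cycle adds a
    constant increment, giving linear growth."""
    capability = start
    for _ in range(steps):
        capability += gain_per_step
    return capability

def hard_takeoff(steps, improvement_rate=0.5, start=1.0):
    """Recursive self-improvement: each cycle's gain is proportional
    to current capability, giving exponential growth."""
    capability = start
    for _ in range(steps):
        capability *= 1 + improvement_rate
    return capability

if __name__ == "__main__":
    # The gap between the two regimes widens quickly with more cycles.
    for n in (5, 10, 20):
        print(n, round(soft_takeoff(n), 2), round(hard_takeoff(n), 2))
```

Under these assumed rates the exponential regime overtakes the linear one within a handful of cycles, which is the intuition behind treating hard takeoff as the harder scenario to intervene in.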
- Hard Takeoff, by Eliezer Yudkowsky
- The Age of Virtuous Machines, by J. Storrs Hall, President of the Foresight Institute
- Hard Takeoff Hypothesis, by Ben Goertzel
- Extensive archive of hard takeoff essays, from Accelerating Future
- Can we avoid a hard take off?, by Vernor Vinge