A hard takeoff refers to the creation of an AGI in a matter of minutes, hours, or days. This scenario is widely considered much more precarious than a “soft takeoff”, due to the possibility of an AGI behaving in unexpected ways (i.e., Unfriendly AI) with less opportunity to intervene before damage is done. In this scenario, as long as the system had adequate hardware, the AGI would also rapidly accelerate into an SAI.
The feasibility of a “hard takeoff” has been addressed by Hugo de Garis, Eliezer Yudkowsky, Ben Goertzel, Nick Bostrom, and Michael Anissimov. However, it is widely agreed that a hard takeoff is something to be avoided due to the risks.
Although several science fiction authors have speculated that an AGI “hard takeoff” might happen by accident (for example, “the Internet waking up”), this opinion is largely dismissed by computer scientists, as intelligence is considered to be a hard problem.
- Hard Takeoff by Eliezer Yudkowsky
- The Age of Virtuous Machines by J. Storrs Hall, President of the Foresight Institute
- The Hard Takeoff Hypothesis by Ben Goertzel
- Extensive hard takeoff resources from Accelerating Future