Hard takeoff
A hard takeoff refers to the creation of a superintelligent AI (SAI) in a matter of minutes, hours, or days. This scenario is widely considered more precarious than a "soft takeoff" because an SAI might behave in unexpected ways (i.e., as an Unfriendly AI), leaving less opportunity to intervene before damage is done.
The feasibility of a hard takeoff has been addressed by Hugo de Garis, Eliezer Yudkowsky, Ben Goertzel, Nick Bostrom, and Michael Anissimov. Whatever its likelihood, it is widely agreed that a hard takeoff is something to be avoided due to the risks it poses.
Although several science fiction authors have speculated that an SAI hard takeoff may happen by accident (for example, "the Internet waking up"), this opinion is largely dismissed by computer scientists, as intelligence is considered to be a hard problem.
Blog Posts
- Hard Takeoff by Eliezer Yudkowsky
- The Age of Virtuous Machines by J. Storrs Hall, President of the Foresight Institute
- Hard Takeoff Hypothesis by Ben Goertzel
External Links
- Extensive hard takeoff resources from Accelerating Future