Hard takeoff

A hard takeoff refers to the creation of an AGI that, given adequate hardware, rapidly accelerates into a superintelligent AI (SAI) in a matter of minutes, hours, or days. This scenario is widely considered much more precarious than a "soft takeoff", because an AGI behaving in unexpected ways (i.e., an Unfriendly AI) would leave little opportunity to intervene before damage is done.
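
The runaway dynamic described above is often framed as recursive self-improvement: the smarter the system becomes, the faster it can make itself smarter still. As a minimal sketch of why the shape of that feedback loop matters, the following toy Python model (entirely illustrative; the function simulate and its parameters k, r, c0, and cap are hypothetical choices, not figures from this article) integrates dC/dt = r * C^k for a capability level C. When k > 1 the capability diverges in finite time, the "hard" trajectory, while k <= 1 grows far more slowly, which is the distinction the hard/soft terminology points at.

    # Toy model of recursive self-improvement (illustrative only).
    # Capability C grows as dC/dt = r * C**k:
    #   k <= 1 -> exponential or slower growth ("soft takeoff")
    #   k >  1 -> finite-time blow-up ("hard takeoff")

    def simulate(k, r=1.0, c0=1.0, dt=0.001, t_max=10.0, cap=1e9):
        """Euler-integrate dC/dt = r * C**k until C exceeds `cap`
        (takeoff) or the time horizon runs out."""
        c, t = c0, 0.0
        while t < t_max:
            c += r * c**k * dt
            t += dt
            if c >= cap:
                return t   # time at which capability "takes off"
        return None        # cap never reached within t_max

    for k in (0.5, 1.0, 1.5):
        t = simulate(k)
        result = f"takeoff at t={t:.2f}" if t else "no takeoff within horizon"
        print(f"k={k}: {result}")

Nothing in this sketch is a prediction; the point is only that the qualitative gap between hard and soft takeoff can come down to a single exponent in an otherwise identical growth model.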

The feasibility of a hard takeoff has been discussed by Hugo de Garis, Eliezer Yudkowsky, Ben Goertzel, Nick Bostrom, and Michael Anissimov. However, it is widely agreed that a hard takeoff is something to be avoided due to the risks involved.

Although several science fiction authors have speculated that an AGI hard takeoff may happen by accident (for example, "the Internet waking up"), this opinion is largely dismissed by computer scientists, as intelligence is considered a hard problem.
