Hard takeoff

From Lesswrongwiki
Revision as of 23:54, 18 June 2012 by Daniel Trenor (talk | contribs)

A hard takeoff refers to the creation of a superintelligent AI (SAI) in a matter of minutes, hours, or days. This scenario is widely considered much more precarious than a "soft takeoff", due to the possibility of an SAI behaving in unexpected ways (i.e. Unfriendly AI) with less opportunity to intervene before damage is done.

The feasibility of a hard takeoff has been addressed by Hugo de Garis, Eliezer Yudkowsky, Ben Goertzel, Nick Bostrom, and Michael Anissimov. While these authors disagree about how likely it is, it is widely agreed that a hard takeoff is something to be avoided due to the risks involved.

Although several science fiction authors have speculated that an SAI hard takeoff may happen by accident (for example, "the Internet waking up"), this opinion is largely dismissed by computer scientists, as intelligence is considered to be a hard problem.
