A '''hard takeoff''' refers to the creation of a superintelligent AI (SAI) in a matter of minutes, hours, or days. This scenario is widely considered much more precarious than a "[[soft takeoff]]", due to the possibility of an SAI behaving in unexpected ways (i.e. as an [[Unfriendly AI]]) with far less opportunity to intervene before damage is done.

The feasibility of a hard takeoff has been addressed by Hugo de Garis, [[Eliezer Yudkowsky]], [[Ben Goertzel]], [[Nick Bostrom]], and Michael Anissimov. It is widely agreed, however, that a hard takeoff is something to be avoided due to the risks involved.

Although several science fiction authors have speculated that an SAI hard takeoff might happen by accident - for example, "the Internet waking up" - this opinion is largely dismissed by computer scientists, as intelligence is considered to be a hard problem.

==Blog Posts==

*[http://lesswrong.com/lw/wf/hard_takeoff/ Hard Takeoff] by Eliezer Yudkowsky
*[http://www.kurzweilai.net/the-age-of-virtuous-machines The Age of Virtuous Machines] by J. Storrs Hall, President of the Foresight Institute
*[http://multiverseaccordingtoben.blogspot.co.uk/2011/01/hard-takeoff-hypothesis.html The Hard Takeoff Hypothesis] by Ben Goertzel

==External Links==

*[http://www.acceleratingfuture.com/michael/blog/2011/05/hard-takeoff-sources/ Extensive Hard Takeoff Resources] from Accelerating Future

==See Also==

*[[Intelligence Explosion]]
*[[Soft takeoff]]
*[[Artificial General Intelligence]]
*[[Singularity]]