Hard takeoff
A hard takeoff refers to the creation of an AGI in a matter of minutes, hours, or days. This scenario is widely considered much more precarious than a “soft takeoff”, because the AGI could behave in unexpected ways (i.e. as an Unfriendly AI) with less opportunity to intervene before damage is done. Given adequate hardware, such an AGI would also rapidly self-improve into a superintelligent AI (SAI).
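The hard/soft distinction is often framed in terms of how strongly current capability feeds back into further self-improvement. As a rough illustration only (a toy model with hypothetical parameter names, not drawn from the sources cited here), capability C can be modeled as growing at a rate dC/dt = k·C^alpha: a feedback exponent alpha greater than 1 produces growth that diverges in finite time (the “hard” regime), while alpha of 1 or below gives exponential or slower (“soft”) growth.

# Toy model (illustrative assumption, not from this article):
# capability C grows as dC/dt = k * C**alpha.
# alpha > 1  -> superlinear feedback, finite-time blow-up ("hard takeoff")
# alpha <= 1 -> exponential or slower growth ("soft takeoff")

def simulate(alpha, k=0.5, c0=1.0, dt=0.01, steps=2000, cap=1e9):
    """Euler-integrate dC/dt = k * C**alpha; stop once C exceeds cap."""
    c, t = c0, 0.0
    for _ in range(steps):
        c += k * (c ** alpha) * dt
        t += dt
        if c >= cap:  # capability has effectively diverged
            break
    return t, c

if __name__ == "__main__":
    for alpha in (0.5, 1.0, 1.5):
        t, c = simulate(alpha)
        regime = "hard (superlinear)" if alpha > 1 else "soft (sub/linear)"
        print(f"alpha={alpha}: {regime:20s} C={c:.3g} at t={t:.2f}")

The specific values of k and alpha here are purely illustrative; the point is only the qualitative difference between the two regimes.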
The feasibility of a “hard takeoff” has been addressed by Hugo de Garis, Eliezer Yudkowsky, Ben Goertzel, Nick Bostrom, and Michael Anissimov. However, debate remains over whether a “hard takeoff” engineering strategy should be adopted, given the risks involved.
Although several science fiction authors have speculated that an AGI “hard takeoff” might happen by accident, for example through “the Internet waking up”, this scenario is largely dismissed by computer scientists, since general intelligence is considered a hard problem.
Blog Posts
- Hard Takeoff by Eliezer Yudkowsky
- The Age of Virtuous Machines by J. Storrs Hall, President of the Foresight Institute
- Hard Takeoff Hypothesis by Ben Goertzel
External Links
- Extensive Hard Takeoff Resources from Accelerating Future