A '''hard takeoff''' refers to the creation of an AGI in a matter of minutes, hours, or days. This scenario is widely considered much more precarious than a "[[soft takeoff]]", because an AGI behaving in unexpected ways (i.e., an [[Unfriendly AI]]) would leave less opportunity to intervene before damage is done. In this scenario, as long as the system had adequate hardware, the AGI would also rapidly accelerate into a superintelligent AI (SAI).

The feasibility of a hard takeoff has been discussed by Hugo de Garis, [[Eliezer Yudkowsky]], [[Ben Goertzel]], [[Nick Bostrom]], and Michael Anissimov.

Although several science fiction authors have speculated that an AGI hard takeoff might happen by accident (for example, "the Internet waking up"), this view is largely dismissed by computer scientists, since intelligence is considered to be a hard problem.

==Blog Posts==

*[http://lesswrong.com/lw/wf/hard_takeoff/ Hard Takeoff] by Eliezer Yudkowsky
*[http://www.kurzweilai.net/the-age-of-virtuous-machines The Age of Virtuous Machines] by J. Storrs Hall, President of the Foresight Institute
*[http://multiverseaccordingtoben.blogspot.co.uk/2011/01/hard-takeoff-hypothesis.html The Hard Takeoff Hypothesis] by Ben Goertzel

==External Links==

*[http://www.acceleratingfuture.com/michael/blog/2011/05/hard-takeoff-sources/ Hard Takeoff Sources] from Accelerating Future, an extensive list of hard takeoff resources

==See Also==

*[[Intelligence Explosion]]
*[[Soft takeoff]]
*[[Artificial General Intelligence]]
*[[Singularity]]
 
