If a [[technological singularity]] happens especially quickly, it is called a '''hard takeoff'''; otherwise, it is called a [[soft takeoff]]. A hard takeoff refers to the creation of an AGI in a matter of minutes, hours, or days. This scenario is widely considered much more precarious than a soft takeoff, because an AGI behaving in unexpected ways (i.e. an Unfriendly AI) would leave less opportunity to intervene before damage is done. Given adequate hardware, such an AGI would also rapidly improve itself into a superintelligent AI (SAI).

What "especially quickly" means is open to interpretation. One important question is whether the outside world has time to react.

The feasibility of a hard takeoff has been addressed by Hugo de Garis, [[Eliezer Yudkowsky]], [[Ben Goertzel]], [[Nick Bostrom]] and Michael Anissimov. However, there remains debate over whether a hard-takeoff engineering strategy should be adopted, given the risks involved.
  
Although several science fiction authors have speculated that an AGI hard takeoff may happen by accident - for example, "the Internet waking up" - this opinion is largely dismissed by computer scientists, since intelligence is considered to be a hard problem.

==Blog posts==
*[http://lesswrong.com/lw/wf/hard_takeoff/ Hard Takeoff] by Eliezer Yudkowsky
*[http://www.kurzweilai.net/the-age-of-virtuous-machines The Age of Virtuous Machines] by J. Storrs Hall, President of the Foresight Institute
*[http://multiverseaccordingtoben.blogspot.co.uk/2011/01/hard-takeoff-hypothesis.html Hard Takeoff Hypothesis] by Ben Goertzel

==External links==

*[http://www.acceleratingfuture.com/michael/blog/2011/05/hard-takeoff-sources/ Hard Takeoff Sources] - an extensive collection of hard takeoff resources from Accelerating Future
==See also==

*[[Intelligence explosion]]
*[[Soft takeoff]]
*[[Artificial General Intelligence]]
*[[Singularity]]

{{stub}}

[[Category:Future]]