Hard takeoff

From Lesswrongwiki
A '''hard takeoff''' refers to the creation of an SAI (superintelligent AI) in a matter of minutes, hours, or days. This scenario is widely considered much more precarious than a “[[soft takeoff]]”, because an SAI could behave in unexpected ways (i.e. become an [[Unfriendly AI]]) with less opportunity to intervene before damage is done.
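The intuition behind takeoff speed can be sketched with a toy growth model. Everything below — the growth equation, the parameter values, and the function name — is an illustrative assumption for this sketch, not part of the source definition:

```python
import math

# Toy model (illustrative assumption, not from the article): a system whose
# capability c improves at a rate proportional to its current level,
# dc/dt = k * c, grows exponentially.  The time to cross any fixed capability
# threshold is log(target / start) / k, so the takeoff duration is inversely
# proportional to the self-improvement rate k.

def days_to_threshold(k_per_day, start=1.0, target=1000.0):
    """Days for capability to grow from `start` to `target` under dc/dt = k*c."""
    return math.log(target / start) / k_per_day

soft = days_to_threshold(k_per_day=0.01)  # weak self-improvement: ~691 days
hard = days_to_threshold(k_per_day=10.0)  # strong self-improvement: under a day
```

Under these assumptions the same capability threshold is crossed in years or in hours depending only on the self-improvement rate, which is the crux of the hard-versus-soft distinction: the disagreement is over how large that rate can plausibly become.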
  
 
The feasibility of a “hard takeoff” has been addressed by Hugo de Garis, [[Eliezer Yudkowsky]], [[Ben Goertzel]], [[Nick Bostrom]], and Michael Anissimov. However likely it may be, it is widely agreed that a '''hard takeoff''' is something to be avoided due to the risks involved.
 
Although several science fiction authors have speculated that an SAI “hard takeoff” may happen by accident (for example, “the Internet waking up”), this opinion is largely dismissed by computer scientists, as intelligence is considered to be a hard problem.
  
 
==Blog Posts==
 
==External Links==

==See Also==

Revision as of 00:54, 19 June 2012