Difference between revisions of "AI takeoff"

From Lesswrongwiki
'''AI takeoff''' refers to a point in the future where an [[Artificial General Intelligence]] is able to self-improve without human assistance. Opinion is divided on how rapidly the intelligence of an AGI would expand; the possibilities can be split into “soft” and “hard” takeoff scenarios.

'''AI takeoff''' refers to a point in the future where an [[Artificial General Intelligence]] expands to become a superintelligent AI (SAI). The speed at which an AGI would expand can be split into “soft” and “hard” takeoff scenarios.
  
A “[[soft takeoff]]” assumes that the system would require months, years or decades before it was able to self-assemble into an AGI. This would be either because the algorithms were too demanding for the hardware or because the AI’s design relied upon experiencing real-world feedback in real time.

“[[Soft takeoff]]” scenarios are ones where an AGI's progression from standard intelligence to an SAI occurs on a time scale that allows human interaction. By maintaining control of the AGI’s ascent, it should be possible for a [[Friendly AI]] to emerge.
  
A “[[hard takeoff]]” would involve the creation of an AGI in a matter of minutes, hours or days. This scenario is widely considered much more precarious, due to the possibility of an AGI behaving in unexpected ways (i.e. an [[Unfriendly AI]]); unlike a “[[soft takeoff]]”, there may not be an opportunity to intervene before something went very wrong.

A “[[hard takeoff]]” is widely considered much more precarious, as it involves an AGI rapidly ascending to an SAI without human control. This may result in unexpected behavior (i.e. an [[Unfriendly AI]]). A [[hard takeoff]] can be defined either as the emergence of a system with vastly greater intelligence or of one that has acquired extensive computing resources (e.g. control of the Internet).
 
 
Both scenarios are limited by the power of the hardware: once the self-improving software has optimized its use of the computing substrate, it could propose the design of better hardware (assuming that it wasn’t somehow capable of endlessly expanding its own processing power). If an algorithm is usefully scalable simply by adding more hardware, the speed at which the AGI’s intelligence expanded would depend on economic investment and the time it would take to attach the new hardware.
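The hardware-limitation point can be illustrated with a toy numerical sketch (an invented illustration, not part of the article; the growth rates and the hardware cap are arbitrary assumptions chosen only to contrast the two regimes): capability growth that saturates at a fixed hardware limit versus growth that compounds unchecked.

```python
# Toy model (purely illustrative): a hardware-limited "soft" trajectory
# versus an unconstrained "hard" trajectory. All numbers are arbitrary.

def takeoff(steps, growth_rate, hardware_cap=None):
    """Return capability after each self-improvement step.

    growth_rate: fractional capability gain per step.
    hardware_cap: if set, capability saturates at this level until
    (hypothetically) more hardware is attached.
    """
    capability = 1.0
    history = []
    for _ in range(steps):
        capability *= 1 + growth_rate
        if hardware_cap is not None:
            capability = min(capability, hardware_cap)
        history.append(capability)
    return history

soft = takeoff(steps=50, growth_rate=0.05, hardware_cap=5.0)  # plateaus at the cap
hard = takeoff(steps=50, growth_rate=0.5)                     # compounds unchecked

print(f"soft takeoff final capability: {soft[-1]:.1f}")
print(f"hard takeoff final capability: {hard[-1]:.2e}")
```

Under these assumptions the capped trajectory flattens at the hardware limit regardless of how many further self-improvement steps occur, while the uncapped one grows geometrically, which is the sense in which hardware (and the investment needed to expand it) bounds the speed of a takeoff.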
 
  
 
==Blog Posts==
 

Revision as of 00:27, 17 June 2012


==External Links==

==See Also==