AI takeoff

AI takeoff refers to a point in the future at which an Artificial General Intelligence (AGI) becomes able to self-improve without human assistance. Opinion is divided on how rapidly the intelligence of an AGI would then expand; the possibilities are commonly split into "soft" and "hard" takeoff scenarios.

A "soft takeoff" assumes that the system would require months, years, or decades before it was able to bootstrap itself into an AGI. This could be because the algorithms are too demanding for the available hardware, or because the AI's design relies on experiencing feedback from the real world in real time.

A "hard takeoff" would involve the creation of an AGI in a matter of minutes, hours, or days. This scenario is widely considered much more precarious because of the possibility of the AGI behaving in unexpected ways (i.e., becoming an Unfriendly AI); unlike in a "soft takeoff", there might be no opportunity to intervene before something went very wrong.

Both scenarios are limited by the power of the available hardware. Once the self-improving software has optimized its computing substrate, it could propose the design of better hardware (assuming it was not somehow capable of endlessly expanding its own processing power). If an algorithm scales usefully simply by adding more hardware, the speed at which the AGI's intelligence expanded would depend on economic investment and on the time it takes to attach the new hardware.
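As a rough illustration of this last point, the following toy simulation (not part of the original article; the growth factors and the hardware delay are made-up assumptions) contrasts a scenario where capability grows mainly through recursive software improvement with one where growth mostly waits on new hardware being attached:

    # Toy model of takeoff speed. All parameters are illustrative assumptions,
    # not claims about real systems.
    def simulate_takeoff(steps, software_gain, hardware_gain, hardware_delay):
        """Capability over time when each self-improvement cycle multiplies
        capability by `software_gain`, and extra hardware (multiplying it by
        `hardware_gain`) only arrives every `hardware_delay` steps."""
        capability = 1.0
        history = [capability]
        for t in range(1, steps + 1):
            capability *= software_gain       # recursive software optimization
            if t % hardware_delay == 0:
                capability *= hardware_gain   # new hardware attached after a delay
            history.append(capability)
        return history

    # "Hard" takeoff flavour: large per-cycle software gains, hardware barely matters.
    hard = simulate_takeoff(steps=20, software_gain=2.0, hardware_gain=1.1, hardware_delay=5)

    # "Soft" takeoff flavour: modest software gains, growth mostly waits on new hardware.
    soft = simulate_takeoff(steps=20, software_gain=1.05, hardware_gain=1.5, hardware_delay=5)

    print(f"software-driven capability after 20 steps: {hard[-1]:.1f}")
    print(f"hardware-limited capability after 20 steps: {soft[-1]:.1f}")

Under these toy numbers the software-driven scenario explodes within a handful of improvement cycles, while the hardware-limited scenario grows only as fast as new hardware can be purchased and installed.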
