AI takeoff

AI takeoff refers to a point in the future where an Artificial General Intelligence (AGI) begins to recursively self-improve. This would lead to an increase in intelligence, and would likely lead to an increase in computing power and other resources. The speed at which an AGI might expand its capabilities is usually split into “soft” and “hard” takeoff scenarios.

Soft takeoff

A soft takeoff refers to an AGI that would self-improve over a period of years or decades. This could be due to either the learning algorithm being too demanding for the hardware or because the AI relies on real-world feedback that would have to be played out in real time. Possible methods that could deliver a soft takeoff, by slowly building on human-level intelligence, are Whole brain emulation, Biological Cognitive Enhancement and software-based strong AGI [1]. Maintaining control over the AGI’s ascent in this way should make it easier for a Friendly AI to emerge.

Vernor Vinge, Hans Moravec and Ray Kurzweil have all expressed the view that soft takeoff is preferable to a hard takeoff as it would be both safer and easier to engineer.

Hard takeoff

A hard takeoff refers to AGI expansion in a matter of minutes, days, or months. It is a fast, abrupt, local increase in capability. This scenario is widely considered much more precarious, as it involves an AGI rapidly ascending in power without human control. This may result in unexpected or undesired behavior (i.e. Unfriendly AI). The feasibility of a hard takeoff has been addressed by Hugo de Garis, Eliezer Yudkowsky, Ben Goertzel, Nick Bostrom and Michael Anissimov. It is widely agreed that a hard takeoff is something to be avoided due to the risks. Yudkowsky points out several possibilities that would make a hard takeoff more likely than a soft takeoff:

  • Roughness: A search space can be naturally rough - have unevenly distributed slope. With constant optimization pressure, you could go through a long phase where improvements are easy, then hit a new volume of the search space where improvements are tough. Or vice versa. Call this factor roughness.
  • Resource overhangs: Rather than resources growing incrementally by reinvestment, there's a big bucket o' resources behind a locked door, and once you unlock the door you can walk in and take them all.
  • Cascades[2] are when one development leads the way to another - for example, once you discover gravity, you might find it easier to understand a coiled spring.
  • Cycles[3] are feedback loops where a process's output becomes its input on the next round. As the classic example of a fission chain reaction illustrates, a cycle whose underlying processes are continuous may show qualitative changes of surface behavior - a threshold of criticality - the difference between each neutron leading to the emission of 0.9994 additional neutrons versus each neutron leading to the emission of 1.0006 additional neutrons. Here k, the effective neutron multiplication factor, is used metaphorically (a numeric sketch of this threshold follows the list).
  • Insights[4] are items of knowledge that tremendously decrease the cost of solving a wide range of problems - for example, once you have the calculus insight, a whole range of physics problems become a whole lot easier to solve. Insights let you fly through, or teleport through, the solution space, rather than searching it by hand - that is, "insight" represents knowledge about the structure of the search space itself.
  • Recursion[5] is the sort of thing that happens when you hand the AI the object-level problem of "redesign your own cognitive algorithms".[6]
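
The threshold of criticality in the Cycles item above can be made concrete with a little arithmetic. The following sketch is an editorial illustration rather than anything from the cited post; the starting quantity of 1.0 and the generation counts are arbitrary. It compares the long-run behavior of a multiplication factor k just below 1 with one just above 1.

# Toy illustration of a criticality threshold: the same compounding rule,
# applied with a factor k slightly below or slightly above 1, produces
# qualitatively different long-run behavior.

def quantity_after(k, generations, start=1.0):
    """Quantity after `generations` rounds when each unit yields k units in the next round."""
    return start * k ** generations

for k in (0.9994, 1.0006):
    print(f"k = {k}")
    for generations in (1_000, 10_000, 100_000):
        print(f"  after {generations:>7,} generations: {quantity_after(k, generations):.3g}")

For k = 0.9994 the quantity decays toward zero, while for k = 1.0006 it grows without bound; that qualitative jump across k = 1 is the threshold the metaphor points at.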

He then argues that biological human evolution and other known optimization processes lack many of these features and have yielded only roughly linear improvement, whereas a human-designed AGI would likely exhibit many of the properties mentioned above[7].
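
The contrast between roughly constant optimization pressure and pressure that is reinvested in the optimizer can also be shown with a toy model. The sketch below is likewise an editorial illustration; the step count, the fixed external gain, and the 10% reinvestment rate are arbitrary assumptions rather than figures from the article or its sources.

# Toy comparison: a fixed external optimization pressure (the article's rough
# picture of evolution) versus recursive self-improvement, where each gain in
# capability feeds back into the size of the next improvement step.

steps = 100
external_gain = 1.0        # capability added per step by outside pressure (arbitrary)
reinvestment_rate = 0.10   # fraction of current capability turned into improvement (arbitrary)

linear_capability = 1.0
recursive_capability = 1.0
for _ in range(steps):
    linear_capability += external_gain                                 # grows by a constant amount
    recursive_capability += reinvestment_rate * recursive_capability   # grows in proportion to itself

print(f"constant pressure after {steps} steps:      {linear_capability:.0f}")      # 101
print(f"recursive reinvestment after {steps} steps: {recursive_capability:.0f}")   # about 13,781

The constant-pressure curve grows linearly while the reinvested one grows exponentially and eventually overtakes it, which is the shape of the argument for why recursion makes a hard takeoff more plausible than the historical record of evolution would suggest.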
