Revision as of 02:09, 4 October 2012

Computing overhang refers to a situation in which far more computing power is available for a given process than current software algorithms can take advantage of. The concept has implications for the development of AI: if new discoveries in deliberative or recursively self-improving algorithms are made, they could bring about a rapid shift from human control to AI control. This could occur through an intelligence explosion, or simply through a mass multiplication of AIs that suddenly take advantage of all the available computing power.
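One way to make the idea concrete: a better algorithm can be worth more than a large hardware upgrade. The sketch below uses illustrative numbers only (not drawn from any real system) to compare the rough operation counts of an O(n²) method and an O(n log n) method on the same input; the ratio is the factor of extra hardware the naive algorithm would need just to keep up.

```python
import math

# Illustrative only: rough operation counts for two algorithms
# solving the same problem on an input of size n.
def naive_ops(n):        # e.g. sorting by repeated scanning: ~n^2 steps
    return n * n

def better_ops(n):       # e.g. a divide-and-conquer sort: ~n log n steps
    return int(n * math.log2(n))

n = 1_000_000
overhang = naive_ops(n) / better_ops(n)
# On a million items the naive algorithm needs tens of thousands of
# times more operations -- compute that better software could reclaim.
print(f"n = {n:,}: the naive algorithm needs ~{overhang:,.0f}x more operations")
```

Under these toy assumptions, the algorithmic improvement alone is equivalent to a hardware speedup of four to five orders of magnitude.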

This overhang could then be exploited during such an explosion, either by a single AI using all available resources to become more intelligent, or by multiple AIs battling over those resources. In theory, the distinction is neither very clear nor very useful, since the scenarios produced by the two situations would likely converge (see Bostrom's discussion of singletons at http://www.nickbostrom.com/fut/singleton.html).

Examples

As an example, imagine a database that you need to copy to another location, but in a different arrangement. You know how to implement the copying process, but you have no operation for transposing or rotating its items, so you must move them by hand, one by one, specifying the new coordinates of each item. If you knew how to implement one of these simple operators, you could reduce the operation to a tiny fraction of the time it would otherwise take. In the field of artificial intelligence, it seems possible that current algorithms for prediction, planning and other tasks use considerably sub-optimal computations, and that other algorithms could perform better on less hardware.
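The database analogy can be sketched with a small matrix transpose: moving each item "by hand" means writing out every coordinate pair, while knowing the transpose operator collapses the whole job into one expression. A minimal Python illustration (the 2x3 table is hypothetical):

```python
# A small "database" laid out as rows; we want it rearranged by columns.
table = [[1, 2, 3],
         [4, 5, 6]]

# By hand: state the new coordinates of every item, one by one.
rotated = [[None, None], [None, None], [None, None]]
for r in range(2):
    for c in range(3):
        rotated[c][r] = table[r][c]

# Knowing the transpose operator, the same rearrangement is one line.
rotated_fast = [list(col) for col in zip(*table)]

assert rotated == rotated_fast  # identical result, far less bookkeeping
print(rotated)  # [[1, 4], [2, 5], [3, 6]]
```

Both versions produce the same arrangement; the second simply stops wasting effort on coordinate bookkeeping, which is the kind of inefficiency a computing overhang consists of.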

Today, enormous amounts of computing power are available in the form of supercomputers and distributed computing. Large AI projects typically grow to fill these resources, either by searching deeper and deeper game trees, as high-powered chess programs do, or by performing large numbers of parallel operations on extensive databases, as IBM's Watson did when playing Jeopardy!. While the extra depth and breadth are helpful, it is likely that this simple brute-force extension of existing techniques is not the optimal use of the computing resources. The computational power required for general intelligence is at most that used by the human brain.
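The cost of the brute-force approach grows exponentially with search depth: a game tree with branching factor b has roughly b^d positions at depth d, so each extra ply of lookahead multiplies the required computation by b. A rough sketch, using 35 as a commonly quoted approximate branching factor for chess (illustrative, not taken from any actual engine):

```python
# Approximate position counts for brute-force game-tree search:
# branching factor b, depth d  ->  about b**d positions to examine.
def leaf_nodes(branching_factor, depth):
    return branching_factor ** depth

b = 35  # rough average branching factor often cited for chess
for depth in (2, 4, 6, 8):
    print(f"depth {depth}: ~{leaf_nodes(b, depth):,} positions")
# Each additional ply multiplies the work by ~35, so doubling the
# hardware buys only a small fraction of one extra ply of lookahead.
```

This is why throwing more hardware at the same search algorithm yields diminishing returns, and why an algorithmic improvement can unlock far more capability than the next hardware generation.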

Though estimates place the computing power needed for whole brain emulation at least a decade away, it is very unlikely that the algorithms used by the human brain are the most computationally efficient route to AI. This is mainly because evolution had no foresight in creating the human mind: our intelligence did not develop with the goal of eventually being modeled in software, but evolved as an adaptation to a human context.


See also