Computing overhang

Computing overhang refers to a situation in which far more computing power is available for a given process than current software algorithms can take advantage of. The concept has implications for the development of AI: if new deliberative or recursively self-improving algorithms are discovered, they could bring about a rapid shift from human control to AI control, either through an intelligence explosion or through a simple mass multiplication of AIs that suddenly takes advantage of all the available computing power.

Examples

As an example, imagine that humans had computers with which to add, but were unaware of the simple algorithm for multiplication. A naïve program would compute "228*537" by adding 537 to a running total two hundred twenty-eight times. With at least three machine operations per addition, this would require at least 684 operations. Once someone discovered the place-value expansion (a generalization of the FOIL method), the computation could be replaced with "(200 + 20 + 8)*(500 + 30 + 7)", which takes only about 30 operations. In the field of Artificial Intelligence, it seems possible that current algorithms for prediction, planning, etc., use considerably sub-optimal computations, and that other algorithms could perform better on less hardware.
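
The gap can be made concrete with a short Python sketch (illustrative code, not part of the original article): one function multiplies by repeated addition and counts the additions it performs, while the other expands both factors by place value and counts its partial-product operations.

  def multiply_by_repeated_addition(a, b):
      """Multiply a * b using only addition; return the product and the number of additions."""
      total, additions = 0, 0
      for _ in range(a):
          total += b
          additions += 1
      return total, additions

  def multiply_by_place_value(a, b):
      """Multiply a * b by expanding each factor into place-value parts
      (e.g. 228 -> 200 + 20 + 8) and summing the partial products.
      Each shifted single-digit multiply and each running addition counts as one operation."""
      parts_a = [int(d) * 10 ** i for i, d in enumerate(reversed(str(a))) if d != "0"]
      parts_b = [int(d) * 10 ** i for i, d in enumerate(reversed(str(b))) if d != "0"]
      total, operations = 0, 0
      for pa in parts_a:
          for pb in parts_b:
              total += pa * pb      # one partial product ...
              operations += 2       # ... counted as a multiply plus an addition
      return total, operations

  print(multiply_by_repeated_addition(228, 537))  # (122436, 228): hundreds of additions
  print(multiply_by_place_value(228, 537))        # (122436, 18): a couple of dozen operations

Both functions reach the same product, but the second does so with roughly an order of magnitude fewer primitive operations; the hardware was capable of the faster computation all along, and only the algorithm was missing.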

Today, enormous amounts of computing power are available in the form of supercomputers and distributed computing. Large AI projects typically grow to fill these resources, either by searching deeper and deeper game trees, as in high-powered chess programs, or by running massively parallel operations over extensive databases, as in IBM's Watson playing Jeopardy!. While the extra depth and breadth help, this simple brute-force extension of existing techniques is unlikely to be the optimal use of those resources. The computational power required for general intelligence is at most that used by the human brain.
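
As a hedged illustration of why brute-force deepening soaks up hardware, the following Python sketch counts how many positions a full-width game-tree search must examine at increasing depths; the branching factor of about 35 is a commonly quoted ballpark for chess and is assumed here rather than taken from the article.

  BRANCHING_FACTOR = 35  # assumed rough average number of legal moves in a chess position

  def positions_searched(depth, branching=BRANCHING_FACTOR):
      """Approximate number of leaf positions in a full-width search of the given depth."""
      return branching ** depth

  for depth in (2, 4, 6, 8):
      print(f"depth {depth}: ~{positions_searched(depth):.2e} positions")

Each extra ply multiplies the work by the branching factor, so even a thousandfold increase in hardware buys only about two additional plies of full-width search; the added computing power is absorbed without any change to the underlying algorithm.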

Though estimates for whole brain emulation place that level of computing power at least a decade away, it is very unlikely that the algorithms used by the human brain are the most computationally efficient way to produce AI. This is mainly because evolution had no insight when it created the human mind: neural algorithms have improved little since they first evolved, so Homo sapiens sits, in some sense, only just above the threshold for general intelligence.
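
The "at least a decade away" claim can be illustrated with a back-of-the-envelope calculation. All three numbers below are assumptions chosen only to show the shape of the argument (a rough 2012-era supercomputer figure, one low-end whole-brain-emulation estimate, and a Moore's-law-style doubling period), not figures taken from the article.

  import math

  current_flops = 1e16          # assumption: order of magnitude of a circa-2012 top supercomputer
  emulation_flops = 1e18        # assumption: one low-end ballpark for whole brain emulation
  doubling_time_years = 1.5     # assumption: Moore's-law-style hardware doubling period

  doublings_needed = math.log2(emulation_flops / current_flops)
  years_until_parity = doublings_needed * doubling_time_years
  print(f"~{doublings_needed:.1f} hardware doublings, i.e. roughly {years_until_parity:.0f} years")

With these debatable inputs the answer comes out to roughly a decade; more demanding emulation estimates push the date much further out, which is why the claim is hedged with "at least".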

References

  • Muehlhauser, Luke; Salamon, Anna (2012). "Intelligence Explosion: Evidence and Import". In Eden, Amnon; Søraker, Johnny; Moor, James H.; et al. (eds.), The Singularity Hypothesis: A Scientific and Philosophical Assessment. Berlin: Springer.