Computing overhang

'''Computing overhang''' refers to a situation where new algorithms can exploit existing computing power far more efficiently than before. This can happen if previously used algorithms have been suboptimal.
 
  
In the context of [[Artificial General Intelligence]], this signifies a situation where it becomes possible to create AGIs that can be run using only a small fraction of the easily available hardware resources. This could lead to an [[intelligence explosion]], or to a massive increase in the number of AGIs, as they could be easily copied to run on countless computers. This could make AGIs much more powerful than before, and present an [[existential risk]].
  
 
==Examples==
In 2010, the President's Council of Advisors on Science and Technology [http://www.whitehouse.gov/sites/default/files/microsites/ostp/pcast-nitrd-report-2010.pdf reported on] a benchmark production planning model having become faster by a factor of 43 million between 1988 and 2003. Of this improvement, only a factor of roughly 1,000 was due to better hardware, while a factor of roughly 43,000 came from algorithmic improvements. This clearly reflects a situation where new programming methods were able to use available computing power more efficiently.
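The arithmetic behind these figures is simply multiplicative: the hardware and algorithmic factors combine to give the overall speedup. The short Python sketch below illustrates that decomposition; the two factors are the report's round numbers, and treating them as independent multipliers is an assumption made purely for illustration.

 import math
 
 # Rough illustration: if hardware and algorithmic gains combine multiplicatively,
 # the report's round numbers recover the overall ~43-million-fold speedup.
 hardware_factor = 1_000     # faster machines, 1988-2003 (report's approximation)
 algorithm_factor = 43_000   # better algorithms over the same period (report's approximation)
 
 total_speedup = hardware_factor * algorithm_factor
 print(f"Combined speedup: {total_speedup:,}x")  # 43,000,000x
 
 # Share of the improvement, measured on a log scale, attributable to algorithms:
 algorithm_share = math.log(algorithm_factor) / math.log(total_speedup)
 print(f"Algorithms account for ~{algorithm_share:.0%} of the log-speedup")

On a log scale, the algorithmic gains account for roughly three-fifths of the total improvement in this example.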
  
Today, enormous amounts of computing power are available in the form of supercomputers and distributed computing. Large AI projects typically grow to fill these resources, either by searching deeper and deeper game trees, as high-powered chess programs do, or by running massively parallel operations over extensive databases, as IBM's Watson did when playing Jeopardy. While the extra depth and breadth are helpful, this simple brute-force extension of existing techniques is unlikely to be the optimal use of the available computing resources; the need for improvement lies with these implementations rather than with the hardware.
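To see why simply deepening a brute-force search runs into diminishing returns, note that a game tree with branching factor b contains roughly b^d nodes at depth d, so multiplying the available compute by k buys only about log(k)/log(b) extra plies. The sketch below illustrates this; the branching factor of 35 (a figure often quoted for chess) and the node budget are illustrative assumptions, not measurements of any particular engine.

 import math
 
 # Illustrative only: a plain minimax-style search with branching factor b visits
 # roughly b**d nodes at depth d, so extra compute buys depth only logarithmically.
 def reachable_depth(node_budget: float, branching_factor: float) -> float:
     """Approximate depth (in plies) searchable within a given node budget."""
     return math.log(node_budget, branching_factor)
 
 BRANCHING_FACTOR = 35   # rough branching factor often quoted for chess (assumption)
 BASE_BUDGET = 1e9       # nodes we can afford to search (arbitrary assumption)
 
 for multiplier in (1, 10, 100, 1_000):
     depth = reachable_depth(BASE_BUDGET * multiplier, BRANCHING_FACTOR)
     print(f"{multiplier:>5}x compute -> depth ~{depth:.1f} plies")

Under these assumptions, a thousandfold increase in compute adds only about two extra plies, which is why algorithmic advances such as better pruning and evaluation tend to matter more than raw hardware growth.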
  
 
Though estimates place the computing power required for [[whole brain emulation]] at least a decade away, it is very unlikely that the algorithms used by the human brain are the most computationally efficient ones for producing AI. This is mainly because evolution had no insight or foresight when it produced the human mind: our intelligence did not develop with the goal of eventually being modeled by AI, but evolved as an adaptation to a ''human'' context.

==References==

* Muehlhauser, Luke; Salamon, Anna (2012). "Intelligence Explosion: Evidence and Import". In Eden, Amnon; Søraker, Johnny; Moor, James H. et al. ''The singularity hypothesis: A scientific and philosophical assessment''. Berlin: Springer.

==See also==