'''Computing overhang''' refers to a situation where new algorithms can exploit existing computing power far more efficiently than before. This can happen if previously used algorithms have been suboptimal.
  
As an example, imagine that humans had computers with which to add, but were unaware of any shortcut algorithm for multiplication. A naïve program for multiplication would execute "228*537" by adding the number 228 to a running total 537 times. With at least three operations per addition, this would require at least 1611 operations. Once someone discovered the FOIL-style expansion, the computation could be replaced with "(200 + 20 + 8)*(500 + 30 + 7)", taking about 30 operations. In the field of Artificial Intelligence, it is possible that current algorithms for prediction, planning, et cetera use considerably sub-optimal computations, and that other algorithms could perform better on less hardware.
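The operation counts above can be checked with a toy sketch (illustrative only, not from the article); the naive version counts one addition per loop iteration, while the positional version only needs a handful of additions per decimal digit:

```python
def multiply_naive(a, b):
    """Multiply by repeated addition: add a to a running total b times.

    Returns (product, additions); at roughly three machine operations
    per addition this matches the rough counts in the text."""
    total, additions = 0, 0
    for _ in range(b):
        total += a
        additions += 1
    return total, additions


def multiply_positional(a, b):
    """Multiply by expanding b into place values, e.g. 537 = 500 + 30 + 7,
    and accumulating a digit-sized batch of additions at each place."""
    total, additions = 0, 0
    place = 1
    while b > 0:
        digit = b % 10
        total += digit * a * place   # stands in for 'digit' small additions
        additions += digit + 1       # digit additions plus one accumulate
        b //= 10
        place *= 10
    return total, additions


print(multiply_naive(228, 537))       # (122436, 537)
print(multiply_positional(228, 537))  # (122436, 18)
```

Same hardware, same answer: the positional algorithm does the job in a few dozen additions instead of hundreds.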
In the context of [[Artificial General Intelligence]], this signifies a situation where it becomes possible to create AGIs that can be run using only a small fraction of the easily available hardware resources. This could lead to an [[intelligence explosion]], or to a massive increase in the number of AGIs, as they could be easily copied to run on countless computers. This could make AGIs much more powerful than before, and present an [[existential risk]].
  
An analogous circumstance occurred in particle physics during the 1960s. Experimental physicists had discovered hundreds of subatomic particles with different properties. Physicists at the time joked that there was a "particle zoo", with far more inhabitants than theory could account for, and it was considered unlikely that hundreds of particles were all fundamental. Eventually the theory of quarks was proposed independently by Murray Gell-Mann and George Zweig, and the hundreds of particles were immediately explained. Later experimentation confirmed that combinations of just six quarks composed all of the particles in the zoo.
==Examples==
In 2010, the President's Council of Advisors on Science and Technology [http://www.whitehouse.gov/sites/default/files/microsites/ostp/pcast-nitrd-report-2010.pdf reported] that a benchmark production planning model had become faster by a factor of 43 million between 1988 and 2003. Of this improvement, only a factor of roughly 1,000 was due to better hardware, while a factor of roughly 43,000 came from algorithmic improvements. This reflects a situation in which new programming methods were able to use available computing power far more efficiently.
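The decomposition of that speedup multiplies out as quoted; a one-line check (numbers taken from the report as cited above):

```python
# Decomposing the 43-million-fold speedup quoted from the PCAST report:
# the hardware and algorithmic gains multiply, and the algorithmic
# factor exceeds the hardware factor by more than an order of magnitude.
hardware_factor = 1_000      # speedup from better hardware, 1988-2003
algorithm_factor = 43_000    # speedup from better algorithms
total_speedup = hardware_factor * algorithm_factor

print(total_speedup)                        # 43000000
print(algorithm_factor / hardware_factor)   # 43.0
```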
  
Enormous amounts of computing power are currently available in the form of supercomputers and distributed computing. Large AI projects can grow to fill these resources, by searching deeper and deeper trees, as in high-powered chess programs, or by performing large amounts of parallel operations on extensive databases, as in IBM's Watson playing Jeopardy. While the extra depth and breadth are helpful, a simple brute-force extension of existing techniques is unlikely to be the optimal use of the available computing resources. This leaves room for improvement on the algorithmic side, which is where most current work is focused.
  
Though estimates of [[whole brain emulation]] place the computing power needed to emulate a human brain at least a decade away, it is very unlikely that the algorithms used by the human brain are the most computationally efficient for producing AI. This is mainly because our brains evolved through natural selection and thus were not deliberately designed to run as efficient algorithms.

As Yudkowsky [http://intelligence.org/files/LOGI.pdf puts it], human intelligence, created by this "blind" evolutionary process, has only recently developed the capacity for planning and forward thinking - ''deliberation''. Almost all of our other cognitive tools were the result of ancestral selection pressures, and they form the roots of almost all our behavior. The design of complex systems in which the designer - us - collaborates with the system being constructed therefore carries a new signature, and offers a route to AGI completely different from the process that gave birth to our brains.

==External links==
*[http://chessbase.com/newsdetail.asp?newsid=8047 King's Gambit solved]
  
 
==References==
 
 
*Muehlhauser, Luke; Salamon, Anna (2012). "[http://intelligence.org/files/IE-EI.pdf Intelligence Explosion: Evidence and Import]". In Eden, Amnon; Søraker, Johnny; Moor, James H. et al. ''The Singularity Hypothesis: A Scientific and Philosophical Assessment''. Berlin: Springer.
==See also==
*[[Optimization process]]
*[[Optimization]]
