AGI skepticism

From Lesswrongwiki
'''AGI skepticism''' involves objections to the possibility of [[Artificial General Intelligence]] being developed in the near future. One common argument is that although AGI is possible in principle, there is no reason to expect it any time soon: despite great strides in narrow AI, researchers are still no closer to understanding how to build AGI. Distinguished computer scientists such as Gordon Bell and Gordon Moore, as well as cognitive scientists such as Douglas Hofstadter and Steven Pinker, have expressed the opinion that AGI is remote ([http://spectrum.ieee.org/computing/hardware/tech-luminaries-address-singularity IEEE Spectrum 2008]). Bringsjord et al. ([http://kryten.mm.rpi.edu/SB_AB_PB_sing_fideism_022412.pdf 2012]) argue outright that belief in AGI being developed within anything short of a century is fideistic, appropriate within the realm of religion but not within science or engineering.
  
Some skeptics not only disagree that AGI is near, but also criticize any discussion of AGI risk in the first place. In their view, such discussion diverts attention from more important issues. Dennett ([http://ingentaconnect.com/content/imp/jcs/2012/00000019/F0020001/art00005 2012]) considers AGI risk an "imprudent pastime" because it distracts our attention from what he sees as a more immediate threat: being enslaved by the internet. Likewise, the philosopher Alfred Nordmann holds that ethical concern is a scarce resource, not to be wasted on unlikely future scenarios (Nordmann [http://commonsenseatheism.com/wp-content/uploads/2011/02/nordmann-if-and-then-a-critique-of-speculative-nanoethics.pdf 2007], [http://spectrum.ieee.org/robotics/robotics-software/singular-simplicity 2009]).
  
Others agree that AGI is still far away and not yet a major concern, but admit that it might be valuable to give the issue some attention. An AAAI presidential panel on long-term AI futures concluded that there was overall skepticism about AGI risk, but that additional research into the topic and related subjects would be valuable ([http://www.aaai.org/Organization/Panel/panel-note.pdf Horvitz & Selman 2009]).
  
 
==External Links==
 
  
*[http://spectrum.ieee.org/computing/hardware/tech-luminaries-address-singularity Critics of the feasibility of AGI] from ieee.org
 
 
*[http://en.wikipedia.org/wiki/AI_winter A history of the AI winter] from Wikipedia
 
  

Revision as of 23:07, 27 June 2012
