'''AGI skepticism''' involves objections to the claim that [[Artificial General Intelligence]] will be developed in the near future. Skeptics include [http://spectrum.ieee.org/computing/hardware/tech-luminaries-address-singularity various technology and science luminaries] such as Douglas Hofstadter, Gordon Bell, Steven Pinker, and Gordon Moore:
  
<blockquote>"It might happen someday, but I think life and intelligence are far more complex than the current singularitarians seem to believe, so I doubt it will happen in the next couple of centuries." -- Douglas Hofstadter</blockquote>
  
A typical argument is that although there have been great strides in narrow AI, researchers are still no closer to understanding how to build general intelligence. Some critics, such as Bringsjord et al. ([http://kryten.mm.rpi.edu/SB_AB_PB_sing_fideism_022412.pdf 2012]), have gone as far as to argue that predictions of near-term AGI belong to the realm of religion, not science or engineering.

Some skeptics go even further, saying that discussion of AGI risk is a dangerous waste of time that diverts attention from more important issues. [http://ingentaconnect.com/content/imp/jcs/2012/00000019/F0020001/art00005 Daniel Dennett] considers AGI risk an "imprudent pastime" because it distracts our attention from more immediate threats, such as being enslaved by the internet, and the philosopher Alfred Nordmann holds the view that ethical concern is a scarce resource, not to be wasted on unlikely future scenarios (Nordmann [http://commonsenseatheism.com/wp-content/uploads/2011/02/nordmann-if-and-then-a-critique-of-speculative-nanoethics.pdf 2007], [http://spectrum.ieee.org/robotics/robotics-software/singular-simplicity 2009]).

There are also skeptics who consider the prospect of near-term AGI remote, but who don't go so far as to dismiss the issue entirely. An [http://www.aaai.org/Organization/Panel/panel-note.pdf AAAI presidential panel on long-term AI futures] concluded that:

<blockquote>There was overall skepticism about the prospect of an intelligence explosion as well as of a “coming singularity,” and also about the large-scale loss of control of intelligent systems. Nevertheless, there was a shared sense that additional research would be valuable on methods for understanding and verifying the range of behaviors of complex computational systems to minimize unexpected outcomes. Some panelists recommended that more research needs to be done to better define “intelligence explosion,” and also to better formulate different classes of such accelerating intelligences. Technical work would likely lead to enhanced understanding of the likelihood of such phenomena, and the nature, risks, and overall outcomes associated with different conceived variants.</blockquote>
  
 
==External Links==
 
==See Also==