'''AGI skepticism''' involves objections to the possibility of [[Artificial General Intelligence]] being developed in the near future. Skeptics include [http://spectrum.ieee.org/computing/hardware/tech-luminaries-address-singularity various technology and science luminaries] such as Douglas Hofstadter, Gordon Bell, Steven Pinker, and Gordon Moore:

<blockquote>"It might happen someday, but I think life and intelligence are far more complex than the current singularitarians seem to believe, so I doubt it will happen in the next couple of centuries." -- Douglas Hofstadter</blockquote>

Many skeptical arguments stem from directly comparing AGI to human cognition, even though human cognition may have little to do with how AGIs are eventually engineered.

The philosopher John Searle, in his thought experiment "The Chinese Room", proposes a flaw in digital computers that would prevent them from ever possessing a "mind". He asks you to imagine a computer program that can take part in a conversation in written Chinese by recognizing symbols and responding with suitable "answer" symbols. A human who speaks no Chinese but follows the same program rules could still carry out the conversation, yet would have no understanding of what was being said. Equally, Searle argues, a computer would not understand the conversation either. This line of reasoning leads to the conclusion that AGI is impossible because digital computers are incapable of forming models that "understand" anything.
  
Stuart Hameroff and Roger Penrose have suggested that human cognition may rely on fundamental quantum phenomena unavailable to digital computers. Although quantum phenomena have been studied in the brain, there is no evidence that they pose a barrier to general intelligence.
A typical argument is that we currently only have narrow AI, and that there is no sign of progress towards general intelligence. Some critics even argue that predictions of near-term AGI [http://kryten.mm.rpi.edu/SB_AB_PB_sing_fideism_022412.pdf belong to the realm of religion], not science or engineering.
  
It has also been observed that since the 1950s there have been several cycles of large investment (from both government and private enterprise) followed by disappointment, as unrealistic predictions made by those working in the field failed to materialize. Critics point to these failures to attack the current generation of AGI researchers. Such a period of reduced funding and stalled progress is often referred to as an "AI winter".
Some skeptics also say that discussion about AGI risk is a dangerous waste of time that diverts attention from more important issues. [http://ingentaconnect.com/content/imp/jcs/2012/00000019/F0020001/art00005 Daniel Dennett] considers AGI risk an "imprudent pastime" because it distracts our attention from more immediate threats, and the philosopher Alfred Nordmann holds the view that ethical concern is a scarce resource, not to be wasted on unlikely future scenarios ([http://commonsenseatheism.com/wp-content/uploads/2011/02/nordmann-if-and-then-a-critique-of-speculative-nanoethics.pdf 1], [http://spectrum.ieee.org/robotics/robotics-software/singular-simplicity 2]).
  
There are also skeptics who think that the prospect of near-term AGI seems remote, but don't dismiss the issue entirely. An [http://www.aaai.org/Organization/Panel/panel-note.pdf AAAI presidential panel on long-term AI futures] concluded that

<blockquote>There was overall skepticism about the prospect of an intelligence explosion as well as of a “coming singularity,” and also about the large-scale loss of control of intelligent systems. Nevertheless, there was a shared sense that additional research would be valuable on methods for understanding and verifying the range of behaviors of complex computational systems to minimize unexpected outcomes. Some panelists recommended that more research needs to be done to better define “intelligence explosion,” and also to better formulate different classes of such accelerating intelligences. Technical work would likely lead to enhanced understanding of the likelihood of such phenomena, and the nature, risks, and overall outcomes associated with different conceived variants.</blockquote>

==Blog Posts==

*[http://www.skeptic.com/reading_room/artificial-intelligence-gone-awry/ Artificial Intelligence Gone Awry] by Peter Kassan from Skeptic.com
  
 
==External Links==

*[http://web.archive.org/web/20071210043312/http://members.aol.com/NeoNoetics/MindsBrainsPrograms.html Minds, Brains and Programs] The original "Chinese room" paper by John Searle
*[http://www.scholarpedia.org/article/Chinese_room_argument Chinese Room Argument Resource] Full description and criticism on Scholarpedia.
*[http://www.quantumconsciousness.org/penrose-hameroff/orchor.html Quantum Consciousness] paper on the possible quantum nature of the brain by Stuart Hameroff and Roger Penrose
*[http://mind.ucsd.edu/papers/penrose/penrosehtml/penrose-text.html Critique of Hameroff/Penrose] by Patricia Churchland
*[http://en.wikipedia.org/wiki/AI_winter A history of the AI winter] from Wikipedia
 
  
 
==See Also==

*[[Artificial General Intelligence]]
*[[Technological forecasting]]
