AGI skepticism
A number of objections have been raised to the possibility of Artificial General Intelligence being developed any time soon. Many of these arguments stem from critics comparing AGI directly to human cognition. However, human cognition may have little to do with how AGIs are eventually engineered.
The philosopher John Searle, in his “Chinese Room” thought experiment, proposes a flaw in the functionality of digital computers that would prevent them from possessing a “mind”. He asks you to imagine a computer program that can take part in a conversation in written Chinese by recognizing symbols and responding with suitable “answer” symbols. An English-speaking human could follow the same program rules and would still be able to carry out a Chinese conversation, yet would have no understanding of what was being said. Equally, Searle argues, a computer would not understand the conversation either. This line of reasoning leads to the conclusion that AGI is impossible because digital computers are incapable of forming models that "understand" general concepts.
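The rule-following program Searle describes can be pictured as nothing more than a lookup table. The sketch below is a deliberately trivial Python toy (the rule table and phrases are invented for illustration, not anything Searle specified); it shows how a procedure can produce plausible replies while containing no representation of what the symbols mean.

```python
# Toy illustration of the "Chinese Room": replies are produced purely by
# matching input symbols against a rule book, with no model of meaning.
# The phrases below are invented placeholders, not real dialogue rules.

RULES = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我没有名字。",    # "What is your name?" -> "I have no name."
}

def chinese_room(symbols: str) -> str:
    """Return the scripted response for a recognized symbol string.

    The function never interprets the symbols; it only looks them up,
    which is the point of the thought experiment.
    """
    return RULES.get(symbols, "请再说一遍。")  # fallback: "Please say that again."

if __name__ == "__main__":
    print(chinese_room("你好吗？"))
```

Whether such symbol manipulation could ever amount to understanding, or whether understanding requires something more, is exactly the question Searle's argument turns on.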
Stuart Hameroff and Roger Penrose have suggested that cognition in humans may rely on fundamental quantum phenomena unavailable to digital computers. Although quantum phenomena have been studied in brains, there is no evidence that they would be a barrier to general intelligence.
It has also been observed that since the 1950s there have been several cycles of large investment (from both government and private enterprise) followed by disappointment caused by unrealistic predictions made by those working in the field. Critics point to these failures as a means to attack the current generation of AGI scientists. Such a period of stalled progress is often referred to as an "AI winter".
Blog Posts
- Artificial Intelligence Gone Awry by Peter Kassan from Skeptic.com
External Links
- Minds, Brains, and Programs – The original "Chinese Room" paper by John Searle
- Chinese Room Argument Resource – Full description and criticism on Scholarpedia.
- Quantum Consciousness – Paper on the possible quantum nature of the brain by Stuart Hameroff and Roger Penrose
- Critique of Hameroff/Penrose by Patricia Churchland
- A history of the AI winter from Wikipedia