Artificial general intelligence

{{wikilink|Strong AI}}
 
{{afwikilink}}

An '''Artificial general intelligence''', or '''AGI''', is a machine capable of behaving intelligently over many domains. The term can be taken as a contrast to ''narrow AI'': systems that do things that would be considered intelligent if a human were doing them, but that lack the sort of general, flexible learning ability that would let them tackle entirely new domains. Though modern computers drastically exceed humans in calculating ability, this in no way suggests that they are generally intelligent.
  
Directly comparing the performance of an AI to human performance is often an instance of [[anthropomorphism]]. The internal workings of an AI [[Mind design space|need not resemble]] those of a human; an AGI could have a radically different set of capabilities from those we are used to seeing in our fellow humans. A powerful AGI operating across many domains could achieve a level of competence in any domain that exceeds that of any human.
  
The [[utility function|values]] of an AGI could also be [[alien values|distinctly alien]] to those of humans, in which case it would not see many human activities as worthwhile and would have [[Giant cheesecake fallacy|no intention]] of exceeding human performance (as humans measure performance). Depending on how an AGI's preferences compare to those of humans, AGIs are classified as [[Friendly artificial intelligence|Friendly]] or [[Unfriendly artificial intelligence|Unfriendly]]. An Unfriendly AGI would pose a large [[existential risk]].
  
Reasons for expecting an AGI's creation in the near future include the continuation of [[Moore's law]], larger datasets for machine learning, progress in neuroscience, an increasing population with better collaborative tools, and the massive incentives for its creation. A survey taken at a 2011 [[Future of Humanity Institute]] conference gave a median estimate of 2050 for a 50% chance that an AGI will have been built, and 2150 for a 90% chance.
  
 
==Blog posts==

*[http://lesswrong.com/lw/wk/artificial_mysterious_intelligence/ Artificial Mysterious Intelligence]
*[http://lesswrong.com/lw/8a9/agi_quotes/ AGI Quotes] by lukeprog

==References==

*[http://www.fhi.ox.ac.uk/__data/assets/pdf_file/0015/21516/MI_survey.pdf Machine Intelligence Survey] by Anders Sandberg and Nick Bostrom
*[http://commonsenseatheism.com/wp-content/uploads/2012/02/Muehlhauser-Salamon-Intelligence-Explosion-Evidence-and-Import.pdf Intelligence Explosion: Evidence and Import] by Luke Muehlhauser and Anna Salamon
  
 
==See also==

*[[AGI Skepticism]]
*[[Friendly AI]]
*[[Unfriendly AI]], [[paperclip maximizer]]
