Artificial general intelligence

From Lesswrongwiki

Revision as of 01:42, 12 July 2012


An '''artificial general intelligence''', or '''AGI''', is a machine capable of behaving intelligently over many domains. The term contrasts with narrow AI: systems that do things that would be considered intelligent if a human were doing them, but that lack the general, flexible learning ability that would let them tackle entirely new domains. Though modern computers can calculate drastically faster than humans, this in no way implies that they are generally intelligent.

Directly comparing the performance of an AI to human performance is often an instance of anthropomorphism. The internal workings of an AI need not resemble those of a human, and an AGI could have a radically different set of capabilities than those we are used to seeing in our fellow humans. A sufficiently powerful AGI operating across many domains could achieve, in any of those domains, a level of competence exceeding that of any human.

The values of an AGI could also be distinctly alien to those of humans, in which case it may not see many human activities as worthwhile and may have no intention of exceeding human performance (as humans value performance). Comparing an AGI's preferences to those of humans, AGIs are classified as Friendly or Unfriendly; an Unfriendly AGI would pose a large existential risk.

Reasons for expecting the creation of an AGI in the near future include the continuation of Moore's law, larger datasets for machine learning, progress in the field of neuroscience, a growing population with better collaborative tools, and the massive incentives for its creation. A survey taken at a 2011 Future of Humanity Institute conference yielded a median estimate of 2050 for the creation of an AGI at 50% confidence, and 2150 at 90% confidence.
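The quantitative force of the Moore's law argument above can be made concrete with a minimal sketch (not part of the original article; the two-year doubling period and the function name are illustrative assumptions):

```python
# Illustrative sketch: Moore's law, as invoked in AGI forecasting,
# posits that transistor counts double roughly every two years,
# i.e. exponential growth in available computation.

def moores_law_factor(years, doubling_period=2.0):
    """Growth factor in transistor count after `years` years,
    assuming one doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# From this 2012 revision to the survey's 50%-confidence median
# estimate of 2050 is 38 years -- about 2**19, or roughly a
# 500,000-fold increase, if the trend were to continue unchanged.
print(f"2012 -> 2050: ~{moores_law_factor(2050 - 2012):,.0f}x")
```

Under these assumptions, nineteen doublings separate the revision date from the survey's median estimate, which is the sense in which "continuation of Moore's law" does substantial work in such forecasts.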

==Blog posts==

==See also==
*[[AGI skepticism]]
*[[Friendly AI]]
*[[Unfriendly AI]], [[paperclip maximizer]]