Difference between revisions of "Artificial general intelligence"

From Lesswrongwiki
Revision as of 08:09, 31 December 2011


An Artificial general intelligence, or AGI, is a machine capable of behaving intelligently across many domains. The term contrasts with narrow AI: systems that do things that would be considered intelligent if a human were doing them, but that lack the general, flexible learning ability that would let them tackle entirely new domains.

Directly comparing the performance of an AI to human performance is often an instance of anthropomorphism. The internal workings of an AI need not resemble those of a human, and an AGI could have a radically different set of capabilities from those we are used to seeing in our fellow humans. A powerful AGI operating across many domains could achieve competence in some domain that exceeds that of any human. On the other hand, today's electronic calculators far exceed human ability at arithmetic, yet this in no way suggests that calculators are generally intelligent.

The values of an AGI could also be distinctly alien to those of humans, in which case it would not see many human activities as worthwhile and would have no intention of exceeding human performance (as humans measure performance).

Comparing an AGI's preferences to those of humans, AGIs are classified as Friendly or Unfriendly.

See also