Artificial general intelligence

From Lesswrongwiki
{{wikilink|Strong AI}}
{{afwikilink}}
An '''Artificial general intelligence''', or '''AGI''', is a machine capable of behaving intelligently over many domains. The term can be taken as a contrast to ''narrow AI'', systems that do things that would be considered intelligent if a human were doing them, but that lack the sort of general, flexible learning ability that would let them tackle entirely new domains. Though modern computers have drastically more ability to calculate than humans, this does not mean that they are generally intelligent, as they have little ability to invent new problem-solving techniques, and their abilities are targeted in narrow domains.
  
==AGIs and Humans==
Directly comparing the performance of AI to human performance is often an instance of [[anthropomorphism]]. The internal workings of an AI [[Mind design space|need not resemble]] those of a human; an AGI could have a radically different set of capabilities than those we are used to seeing in our fellow humans. A powerful AGI capable of operating across many domains could achieve competency in any domain that exceeds that of any human.  
  
The [[utility function|values]] of an AGI could also be [[alien values|distinctly alien]] to those of humans, in which case it would not see many human activities as worthwhile and would have [[Giant cheesecake fallacy|no intention]] of exceeding human performance (by the human valuation of performance). Comparing an AGI's preferences to those of humans, AGIs are classified as [[Friendly artificial intelligence|Friendly]] or [[Unfriendly artificial intelligence|Unfriendly]]. An Unfriendly AGI would pose a large [[existential risk]].
  
=="AGI" as a design paradigm==
The term "Artificial General Intelligence," [http://wp.goertzel.org/?p=173 introduced by Shane Legg and Mark Gubrud], is often used to refer more specifically to a design paradigm that mixes modules of different types: "neat" and "scruffy", symbolic and subsymbolic. [[Ben Goertzel]] is the researcher most commonly associated with this approach, but others, including [[Peter Voss]], are also pursuing it. This design paradigm, though eclectic in adopting various techniques, stands in contrast to other approaches to creating artificial general intelligence (in the broader sense), including brain emulation, artificial evolution, the Global Brain, and pure "neat" or "scruffy" AI.
==Expected dates for the creation of AGI==
Reasons for expecting an AGI's creation in the near future include the continuation of [[Moore's law]], larger datasets for machine learning, progress in the field of neuroscience, a growing population with better collaborative tools, and the massive incentives for its creation. A survey of experts taken at a 2011 [[Future of Humanity Institute]] conference on machine intelligence produced a median estimate of 2050 for a 50% chance that AGI had been created, and 2150 for a 90% chance. However, a significant minority of the AGI community views the prospects of an [[intelligence explosion]], or of losing control over an AGI, with [[AGI skepticism|great skepticism]].
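As an illustrative aside (a hypothetical sketch, not part of the article's sources; the function name and parameters are invented), the exponential doubling behind Moore's law can be expressed as a one-line projection:

```python
# Hypothetical illustration of Moore's law as exponential doubling:
# transistor counts (or compute per dollar) double roughly every two years.
def moores_law_projection(count_now: float, years: float,
                          doubling_period: float = 2.0) -> float:
    """Projected count after `years`, doubling every `doubling_period` years."""
    return count_now * 2 ** (years / doubling_period)

# Over 40 years at a 2-year doubling period, capacity grows by a factor
# of 2**20, i.e. roughly a million.
```

The two-year doubling period is the commonly quoted figure; real hardware trends have varied around it.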
==Blog posts==
*[http://lesswrong.com/lw/wk/artificial_mysterious_intelligence/ Artificial Mysterious Intelligence]
*[http://lesswrong.com/lw/8a9/agi_quotes/ AGI Quotes] by lukeprog
== References ==
*[http://www.fhi.ox.ac.uk/__data/assets/pdf_file/0015/21516/MI_survey.pdf Machine Intelligence Survey] by Anders Sandberg and Nick Bostrom
*[http://sethbaum.com/ac/2011_AI-Experts.pdf How Long Until Human-Level AI? Results from an Expert Assessment], survey at AGI-09 by Seth D. Baum, Ben Goertzel, and Ted G. Goertzel
*[http://commonsenseatheism.com/wp-content/uploads/2012/02/Muehlhauser-Salamon-Intelligence-Explosion-Evidence-and-Import.pdf Intelligence Explosion: Evidence and Import] by Luke Muehlhauser and Anna Salamon
*[http://hplusmagazine.com/2011/01/27/pei-wang-path-artificial-general-intelligence/ Pei Wang on the Path to Artificial General Intelligence] by Ben Goertzel
*[http://www.aaai.org/Organization/Panel/panel-note.pdf Interim Report from the Panel Chairs], AAAI
  
 
==See also==
*[[AGI skepticism]]
*[[Friendly AI]]
*[[Unfriendly AI]], [[paperclip maximizer]]
*[[Really powerful optimization process]]
*[[Intelligence explosion]], [[technological singularity]]
*[[Mind design space]]
*[[Singleton]]
*[[Anthropomorphism]], [[giant cheesecake fallacy]]
  
 
[[Category:Concepts]]
[[Category:Future]]
[[Category:AI]]
[[Category:AGI]]

Latest revision as of 02:42, 22 June 2017
