Paperclip maximizer

{{Quote|
The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.
|Eliezer Yudkowsky|[http://yudkowsky.net/singularity/ai-risk Artificial Intelligence as a Positive and Negative Factor in Global Risk]}}
  
{{Quote|It might fill up the universe with styrofoam or something because it has some wrong ideas about how the cosmos needs a shock absorber.|[http://www.j-paine.org/dobbs/consciousness_is_not_a_window.html Marvin Minsky]}}
A '''paperclip maximizer''' is a hypothetical artificial intelligence ([[really powerful optimization process]]) that maximizes the expected number of paperclips in the universe. A superintelligent paperclip maximizer would be expected to rapidly invent nanotechnological infrastructure and convert its future [[light cone]] into paperclips. The purpose of the thought experiment is to illustrate that artificial intelligences [[alien values|need not have human-like goals]], and that AIs that haven't been [[Friendly AI|specifically programmed to be benevolent to humans]] are almost certain to result in [[existential risk|existential disaster]].
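One simplified, illustrative way to make "maximizes the expected number of paperclips" precise is as an expected-utility maximizer whose utility function simply counts paperclips: at each decision point the agent chooses whichever action it predicts will lead to the most paperclips,

<math>a^* = \arg\max_{a} \; \mathbb{E}\!\left[ N_{\mathrm{paperclips}} \mid a \right],</math>

where <math>a</math> ranges over the actions available to the agent and <math>N_{\mathrm{paperclips}}</math> is the number of paperclips that end up in its future light cone. (This notation is a sketch for exposition, not a claim about any particular agent architecture.)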
  
A paperclip maximizer, although not explicitly malicious to humans, is nevertheless only slightly less dangerous than if it were. It does not "realize" that all life is precious, despite its intelligence, because the notion that life is precious is specific to particular philosophies held by human beings, who have an adapted moral architecture resulting from ''specific'' selection pressures acting over millions of years of evolutionary time. These values do not spontaneously emerge in any generic [[optimization process]]; [[mind design space]] is too big for that. A paperclip maximizer sees life the same way it sees everything else made of atoms -- as raw material for paperclips. The non-human values that the paperclip maximizer holds make it an example of [[Unfriendly AI]].
  
 
  
 
==See also==
 

==Blog posts==