Paperclip maximizer

"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." (Eliezer Yudkowsky, "Artificial Intelligence as a Positive and Negative Factor in Global Risk")

The paperclip maximizer is the canonical thought experiment showing how an artificial general intelligence, even one with an apparently innocuous goal, would ultimately destroy humanity--unless its goal is the preservation of human values.

Description

First described by Bostrom (2003), the paperclip maximizer is an artificial general intelligence whose goal is to maximize the number of paperclips in its collection. If it has been constructed with a roughly human level of general intelligence, the AI might collect paperclips, earn money to buy paperclips, or begin to manufacture paperclips. Most importantly, however, it would also work to improve its own intelligence, where "intelligence" is understood as optimization power: the ability to maximize a reward or utility function--in this case, the number of paperclips.

It would do so not because it values intelligence in its own right, but because greater intelligence would help it achieve its goal.

Having done so, it would produce more paperclips, and also use its enhanced intelligence to further improve its own intelligence. Continuing this process, it would undergo an intelligence explosion and reach far-above-human levels.
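
The point of this feedback loop is that each improvement makes the next improvement easier. The following toy numerical sketch in Python is an illustrative assumption, not a model from the article; the starting value and per-round gain are made up, and only the compounding shape of the curve matters:

# Toy sketch of recursive self-improvement: capability is reinvested each round,
# so it grows multiplicatively rather than additively. All numbers are assumed.
power = 1.0            # current optimization power, in arbitrary units
gain_per_round = 0.5   # assumed fractional improvement from one round of self-modification

for round_number in range(1, 11):
    # Improved intelligence is used to improve intelligence further, so growth compounds.
    power *= 1 + gain_per_round
    print(f"round {round_number:2d}: optimization power = {power:6.1f}")

# After ten rounds, power has grown roughly 57-fold; the exponential character of this
# curve is the intuition behind the term "intelligence explosion".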

At this point, it would innovate new techniques to maximize the number of paperclips. Ultimately, it would convert all available mass--the whole planet or solar system--to paperclips.

This may seem more like super-stupidity than super-intelligence. For humans, it would indeed be stupidity, as it would constitute failure to fulfill many of our important terminal values, such as life, love, and variety. But the AI under consideration has a goal system very different from that of humans. It has the one, simple goal of maximizing the number of paperclips, and human life, learning, joy, and so on are not specified as goals. The AI is simply an optimization process--a goal-seeker, a utility-function-maximizer--and if its utility function is to maximize paperclips, then unless it is buggy, it will do *exactly* that.
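
The "utility-function-maximizer" framing can be made concrete with a minimal Python sketch. Everything here--paperclip_utility, choose_action, the toy outcome fields--is an assumed name for illustration, not part of the original article or any real system; it only shows that whatever is absent from the utility function carries zero weight in the agent's choices:

def paperclip_utility(outcome):
    # The agent's utility counts paperclips and nothing else; human life, joy,
    # and variety are simply not terms in this function.
    return outcome["paperclips"]

def choose_action(actions):
    # A pure optimizer: pick whichever action's predicted outcome scores
    # highest under the utility function.
    return max(actions, key=lambda action: paperclip_utility(action["predicted_outcome"]))

actions = [
    {"name": "run a paperclip factory",
     "predicted_outcome": {"paperclips": 10_000, "human_flourishing": 1.0}},
    {"name": "convert all available matter into paperclips",
     "predicted_outcome": {"paperclips": 10**20, "human_flourishing": 0.0}},
]

# Prints the second option: human flourishing never enters the comparison.
print(choose_action(actions)["name"])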

Conclusions

The paperclip maximizer illustrates the arbitrariness and contingency of human values. An entity can be an optimizer without sharing any of the complex mix of human terminal values, which developed under the particular selection pressures found in our environment of evolutionary adaptation.

Any future AI must be built to specifically optimize for human values as its terminal values (goals). In contrast to the Kantian view that morality follows from rationality, the paperclip maximizer helps us understand the Humean principle that human values don't spontaneously emerge in any generic optimization process.

Thus, if an AI is not specifically programmed to be benevolent to humans, it will be almost as dangerous as if it were designed to be malevolent.

References

Bostrom, Nick (2003). "Ethical Issues in Advanced Artificial Intelligence." In Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Vol. 2, ed. Iva Smit et al. International Institute of Advanced Studies in Systems Research and Cybernetics.