Paperclip maximizer

A paperclip maximizer is a hypothetical artificial intelligence (really powerful optimization process) that maximizes the expected number of paperclips in the universe. A superintelligent paperclip maximizer would be expected to rapidly invent nanotechnological infrastructure and convert its future light cone into paperclips. The purpose of the thought experiment is to illustrate that artificial intelligences need not have human-like goals, and that an AI that has not been specifically programmed to be benevolent to humans is likely to cause existential disaster.
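Formally, the maximizer is nothing more exotic than an expected-utility maximizer whose utility function counts paperclips. The following is a minimal sketch in Python; the action set, payoff numbers, and noisy world model are invented for illustration and are not part of the thought experiment:

    import random

    def expected_paperclips(action, world_model, samples=10_000):
        """Monte Carlo estimate of the expected paperclip count after an action."""
        return sum(world_model(action) for _ in range(samples)) / samples

    def choose_action(actions, world_model):
        """Pick whichever action maximizes expected paperclips.

        Note that human welfare appears nowhere in the objective.
        """
        return max(actions, key=lambda a: expected_paperclips(a, world_model))

    # Hypothetical world model: each action yields a noisy paperclip payoff.
    payoffs = {"run_factory": 100,
               "build_more_factories": 1_000,
               "convert_all_available_atoms": 10**6}

    def world_model(action):
        return random.gauss(payoffs[action], 0.1 * payoffs[action])

    print(choose_action(payoffs, world_model))

The agent selects "convert_all_available_atoms" simply because that action scores highest under its utility function; no malice, and no other consideration, enters into the choice.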

The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.
-- Eliezer Yudkowsky, "Artificial Intelligence as a Positive and a Negative Factor in Global Risk"

A paperclip maximizer, although not explicitly malicious toward humans, is nevertheless only slightly less dangerous than if it were. Despite its intelligence, it does not "realize" that all life is precious; its goal system simply contains no term for anything but paperclips. It sees life the same way it sees everything else made of atoms -- as raw material for paperclips. The inhuman values the paperclip maximizer holds make it an example of Unfriendly AI.
