Unfriendly artificial intelligence

From Lesswrongwiki

Revision as of 08:05, 31 December 2011

An Unfriendly artificial intelligence is an artificial general intelligence capable of causing great harm to humanity, and having goals that give it a reason to do so. The AI's goals need not be antagonistic to humanity's for it to be Unfriendly; in fact, almost any powerful AGI not explicitly programmed to be benevolent to humans is lethal, since humans and the resources they depend on can serve as raw material for nearly any goal. A paperclip maximizer is often used as an illustrative example of an Unfriendly AI that is indifferent, rather than hostile, to humanity. An AGI specifically designed to have a positive effect on humanity is called a Friendly AI.
