Difference between revisions of "Unfriendly artificial intelligence"
From Lesswrongwiki
m |
|||
Line 1: | Line 1: | ||
− | An '''Unfriendly Artificial Intelligence''' is a Strong AI capable of causing great harm to humanity, and having goals that make it useful for the AI to do so. The AI's goals don't need to be antagonistic to humanity's goals for it to be Unfriendly. [[Paperclip maximizer]] is an extreme example of Unfriendly AI that is indifferent to humanity. The concept of Unfriendly AI is contrasted by the concept of [[Friendly AI]]. | + | An '''Unfriendly Artificial Intelligence''' is a [[Strong AI]] capable of causing great harm to humanity, and having goals that make it useful for the AI to do so. The AI's goals don't need to be antagonistic to humanity's goals for it to be Unfriendly. [[Paperclip maximizer]] is an extreme example of Unfriendly AI that is indifferent to humanity. The concept of Unfriendly AI is contrasted by the concept of [[Friendly AI]]. |
==See also== | ==See also== |
Revision as of 00:52, 16 June 2009
An Unfriendly Artificial Intelligence is a Strong AI capable of causing great harm to humanity, and having goals that make it useful for the AI to do so. The AI's goals don't need to be antagonistic to humanity's goals for it to be Unfriendly. Paperclip maximizer is an extreme example of Unfriendly AI that is indifferent to humanity. The concept of Unfriendly AI is contrasted by the concept of Friendly AI.
See also
External references
- Eliezer S. Yudkowsky (2008). "Artificial Intelligence as a Positive and Negative Factor in Global Risk". Global Catastrophic Risks. Oxford University Press. http://yudkowsky.net/singularity/ai-risk. (PDF)
- Stephen M. Omohundro (2008). "The Basic AI Drives". Frontiers in Artificial Intelligence and Applications (IOS Press). http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/. (PDF)