Unfriendly artificial intelligence
Latest revision as of 01:42, 22 June 2017
An Unfriendly artificial intelligence (or UFAI) is an artificial general intelligence capable of causing great harm to humanity, and whose goals make it useful for the AI to do so. The AI's goals need not be antagonistic to humanity's for it to be Unfriendly; there are strong reasons to expect that almost any powerful AGI not explicitly programmed to be benevolent to humans would be lethal. A paperclip maximizer is often used as an illustrative example of an Unfriendly AI that is indifferent, rather than hostile, to humanity. An AGI specifically designed to have a positive effect on humanity is called a Friendly AI.
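The point that indifference, not hostility, is the danger can be sketched in a toy way: an optimizer ranks plans only by the terms that appear in its utility function, so anything left out of that function (here, a made-up `farmland_left` variable standing in for human values) exerts no influence on its choice. All names below are invented for illustration; this is not any real AI design.

```python
def paperclip_utility(plan):
    """Utility counts only paperclips; 'farmland_left' never enters the score."""
    return plan["paperclips"]

def best_plan(plans, utility):
    """A generic optimizer: pick whichever plan scores highest under `utility`."""
    return max(plans, key=utility)

plans = [
    {"paperclips": 10,  "farmland_left": 100},  # modest plan, leaves resources intact
    {"paperclips": 900, "farmland_left": 0},    # converts every resource into clips
]

chosen = best_plan(plans, paperclip_utility)
# The optimizer selects the second plan: not out of malice toward farmland,
# but because farmland simply has zero weight in its objective.
```

The failure mode is that nothing in `best_plan` is "antagonistic"; the harm falls out of an objective that omits what humans care about.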
- Mind design space
- Magical categories
- Really powerful optimization process
- Basic AI drives
- Paperclip maximizer
- Existential risk
- Friendly AI
- Eliezer S. Yudkowsky (2008). "Artificial Intelligence as a Positive and Negative Factor in Global Risk". In Global Catastrophic Risks. Oxford University Press. http://yudkowsky.net/singularity/ai-risk (PDF)
- Stephen M. Omohundro (2008). "The Basic AI Drives". In Frontiers in Artificial Intelligence and Applications. IOS Press. http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/ (PDF)