Unfriendly artificial intelligence

An '''Unfriendly artificial intelligence''' (or '''UFAI''') is an [[artificial general intelligence]] capable of causing [[existential risk|great harm]] to humanity, and whose goals [[Instrumental values|make it useful]] for it to do so. The AI's goals don't need to be antagonistic to humanity's goals for it to be Unfriendly; there are [[Basic AI drives|strong reasons]] to expect that almost any powerful AGI not explicitly programmed to be benevolent to humans is lethal. A [[paperclip maximizer]] is often imagined as an illustrative example of an Unfriendly AI that is indifferent to humanity. An AGI specifically designed to have a positive effect on humanity is called a [[Friendly AI]].
  
 
==See also==

*[[Mind design space]], [[magical categories]]
*[[Really powerful optimization process]]
*[[Basic AI drives]]
*[[Paperclip maximizer]]

==References==

*{{cite book
|title=Global Catastrophic Risks
|publisher=Oxford University Press
|url=http://yudkowsky.net/singularity/ai-risk}} ([http://intelligence.org/files/AIPosNegFactor.pdf PDF])
 
*{{cite journal
|author=Stephen M. Omohundro
|title=The Basic AI Drives}}
 
[[Category:AI]]
[[Category:Existential risk]]
[[Category:AI safety]]
[[Category:AGI]]
