Difference between revisions of "Friendly artificial intelligence"
From Lesswrongwiki
Revision as of 00:59, 16 June 2009
A Friendly Artificial Intelligence (FAI) is an artificial intelligence (AI) that has a positive rather than a negative effect on humanity. Friendly AI also refers to the field of knowledge required to build such an AI.
See also
Blog posts
External references
- Eliezer S. Yudkowsky (2008). "Artificial Intelligence as a Positive and Negative Factor in Global Risk". Global Catastrophic Risks. Oxford University Press. http://yudkowsky.net/singularity/ai-risk (PDF: http://intelligence.org/AIRisk.pdf)
- Creating Friendly AI by Eliezer Yudkowsky. http://intelligence.org/upload/CFAI/