Eliezer Yudkowsky

Eliezer Yudkowsky is a research fellow of the Singularity Institute for Artificial Intelligence, which he co-founded in 2001. He is mainly concerned with the importance of, and obstacles to, developing Friendly AI, such as a reflective decision theory that would lay a foundation for describing fully recursive self-modifying agents that retain stable preferences while rewriting their source code. He also co-founded Less Wrong, writing most of The Sequences, long series of posts dealing with epistemology, AGI, metaethics, rationality, and so on.

He has published several articles, including: "Cognitive Biases Potentially Affecting Judgment of Global Risks" (2008), "Artificial Intelligence as a Positive and Negative Factor in Global Risk" (2008), "Creating Friendly AI" (2001), "Levels of Organization in General Intelligence" (2002), "Coherent Extrapolated Volition" (2004), "Timeless Decision Theory" (2010), and "Complex Value Systems are Required to Realize Valuable Futures" (2011).

Links