Eliezer Yudkowsky
{{wikilink}}
{{afwikilink}}
'''Eliezer Yudkowsky''' is a research fellow of the [[Singularity Institute for Artificial Intelligence]], which he co-founded in 2001. His work focuses on the challenges and importance of developing a [[Friendly AI]], including a reflective decision theory that would lay a foundation for describing fully recursive, self-modifying agents that retain stable preferences while rewriting their own source code. He also co-founded Less Wrong and wrote most of the [[Sequences]], a long series of posts dealing with epistemology, [[AGI]], [[metaethics]], [[rationality]], and related topics.
He has published several articles, including: [http://intelligence.org/files/CognitiveBiases.pdf "Cognitive Biases Potentially Affecting Judgment of Global Risks" (2008)], [http://intelligence.org/files/AIRisk.pdf "AI as a Positive and Negative Factor in Global Risk" (2008)], [http://intelligence.org/upload/CFAI/index.html "Creating Friendly AI" (2001)], [http://intelligence.org/upload/LOGI//LOGI.pdf "Levels of Organization in General Intelligence" (2002)], [http://intelligence.org/upload/CEV.html "Coherent Extrapolated Volition" (2004)], [http://intelligence.org/upload/TDT-v01o.pdf "Timeless Decision Theory" (2010)] and [http://intelligence.org/upload/complex-value-systems.pdf "Complex Value Systems are Required to Realize Valuable Futures" (2011)].
==Links==
* Eliezer Yudkowsky's user page at Less Wrong
* A list of all of Yudkowsky's posts to Overcoming Bias, and dependency graphs for them
* Eliezer Yudkowsky Facts by steven0461