Eliezer Yudkowsky

{{wikilink}}
{{afwikilink}}
'''Eliezer Yudkowsky''' is a research fellow of the [[Singularity Institute for Artificial Intelligence]], which he co-founded in 2001. He is mainly concerned with the obstacles to and the importance of developing a [[Friendly AI]], and with reflective decision theory, a foundation for describing fully recursive self-modifying agents that retain stable preferences while rewriting their source code. He also co-founded Less Wrong and wrote most of the [[Sequences]], long series of posts on epistemology, [[AGI]], [[metaethics]], [[rationality]], and related topics.
  
 
He has published several articles, including: [http://intelligence.org/files/CognitiveBiases.pdf "Cognitive Biases Potentially Affecting Judgment of Global Risks" (2008)], [http://intelligence.org/files/AIRisk.pdf "AI as a Positive and Negative Factor in Global Risk" (2008)], [http://intelligence.org/upload/CFAI/index.html "Creating Friendly AI" (2001)], [http://intelligence.org/upload/LOGI//LOGI.pdf "Levels of Organization in General Intelligence" (2002)], [http://intelligence.org/upload/CEV.html "Coherent Extrapolated Volition" (2004)], [http://intelligence.org/upload/TDT-v01o.pdf "Timeless Decision Theory" (2010)] and [http://intelligence.org/upload/complex-value-systems.pdf "Complex Value Systems are Required to Realize Valuable Futures" (2011)].


==Links==