Eliezer Yudkowsky is a research fellow of the Singularity Institute for Artificial Intelligence, which he co-founded in 2001. He is mainly concerned with the obstacles to, and importance of, developing a Friendly AI, such as a reflective decision theory that could lay a foundation for describing fully recursive self-modifying agents that retain stable preferences while rewriting their source code. He also co-founded Less Wrong and wrote most of The Sequences, long sequences of posts dealing with epistemology, AGI, metaethics, rationality, and so on.
He has published several articles, including:
- “Cognitive Biases Potentially Affecting Judgment of Global Risks” (2008): A pioneering compilation of cognitive biases – systematic deviations from rationality – that influence our judgment of Global Catastrophic Risks: risks with the potential to inflict serious damage to human well-being on a global scale and to kill millions of people (e.g. volcanic eruptions, pandemics, nuclear accidents, worldwide tyrannies, out-of-control scientific experiments, climate change, cosmic hazards, and economic collapse). It is a chapter from a larger volume analyzing those risks.
- “AI as a Positive and Negative Factor in Global Risk” (2008): A chapter in the same volume as the previous paper; it analyzes possible philosophical and technical failures in the construction of a Friendly AI that could lead to an Unfriendly AI posing an enormous global risk. He also discusses how a Friendly AI could help decrease the other global risks discussed in the book. Finally, because a powerful AI could either become a global risk itself or help reduce other risks, he argues that researching this topic is extremely important.
- "Creating Friendly AI"(2001): One of the first articles to address the challenges in designing the features and cognitive architecture required to produce a benevolent - "Friendly" - Artificial Intelligence . It also gives one of the first more precise definitions of terms such as Friendly AI and Seed AI.
- "Levels of Organization in General Intelligence" (2002): Analysis AGI through its decomposition in five subsystems, successive levels of functional organization: Code, sensory modalities, concepts, thoughts, and deliberation. It also discusses some advantages artificial minds would have, such as the possibility of Recursive self-improvement.
- "Coherent Extrapolated Volition"(2004): Presents the difficulties and possible solutions for incorporating friendliness into an AGI. It proposes that making an AGI doing what we tell it to could be dangerous, since we don`t know what we want. Instead we should program the AGI to do what we want, predicting what the vectorial sum of an idealized version of us would want, "if we knew more, thought faster, were more the people we wished we were, had grown up farther together”. He calls this the coherent extrapolated volition of humankind, or CEV.
- "Timeless Decision Theory" (2010): Describes Timeless Decision Theory,” an extension of causal decision networks that compactly represents uncertainty about correlated computational processes and represents the decision maker as such a process”. It solves many problems which Causal Decision Theory doesn`t have a plausible solution: Newcomb’s Problem, Solomon’s Problems and Prisoner’s Dilemma.
- "Complex Value Systems are Required to Realize Valuable Futures" (2011): Discusses the Complexity of values: we can’t come up with a simple rule or description that sums up all human values. It analyses how this problem makes it difficult to build a valuable future.
- Eliezer Yudkowsky's user page at Less Wrong
- A list of all of Yudkowsky's posts to Overcoming Bias, and dependency graphs for them
- Eliezer Yudkowsky Facts by steven0461