The Hanson-Yudkowsky AI-Foom Debate

* [http://www.overcomingbias.com/2008/11/ai-go-foom.html AI Go Foom] by [[Robin Hanson]]

* [http://lesswrong.com/lw/rk/optimization_and_the_singularity/ Optimization and the Singularity] by [[Eliezer Yudkowsky]]

* [http://www.overcomingbias.com/2008/06/eliezers-meta-l.html Eliezer's Meta-Level Determinism] by [[Robin Hanson]]

* [http://lesswrong.com/lw/w2/observing_optimization/ Observing Optimization] by [[Eliezer Yudkowsky]]

* [http://lesswrong.com/lw/w3/lifes_story_continues/ Life's Story Continues] by [[Eliezer Yudkowsky]]

In late 2008, an extensive and long-awaited debate about the Technological Singularity took place on Overcoming Bias, mainly between Robin Hanson and Eliezer Yudkowsky. It focused on the likelihood of a hard AI takeoff ("FOOM"), the need for a theory of Friendliness, and the future of AI, brain emulations, and recursive self-improvement in general. This debate is often cited to illustrate the difficulty of resolving disagreements, even among expert rationalists.

The posts constituting this debate have been collected here to make it easier to follow for future audiences.

== Prologue ==

== Main sequence ==

== Conclusion ==

== Postscript ==

== See also ==