Ben Goertzel

From Lesswrongwiki
 
==Blog posts==

*[http://lesswrong.com/lw/75a/link_ben_goertzel_does_humanity_need_an_ainanny/ Does Humanity Need an AI Nanny?]
*[http://lesswrong.com/lw/aw7/muehlhausergoertzel_dialogue_part_1/ Muehlhauser-Goertzel Dialogue, Part 1], [http://lesswrong.com/r/discussion/lw/c7h/muehlhausergoertzel_dialogue_part_2/ Part 2]
*[http://jetpress.org/v25.2/goertzel.htm Superintelligence: Fears, Promises and Potentials]
 

Revision as of 16:07, 23 November 2016

Wikipedia has an article about Ben Goertzel.

Ben Goertzel is the chairman of the AGI company Novamente and the founder of the AGI conference series.

Goertzel held the title of Director of Research at the Machine Intelligence Research Institute (then the Singularity Institute, SI) in 2008, while SI and Novamente funded the AI project OpenCog. Goertzel is currently an advisor to SI. He notably disagrees with SI's arguments regarding the need for Friendly AI.

Goertzel has responded to Eliezer Yudkowsky's Coherent Extrapolated Volition with his own variation of the idea, Coherent Aggregated Volition.



==External Links==


==See also==