Emulation argument for human-level AI

From Lesswrongwiki
Revision as of 01:00, 12 September 2012 by Joaolkf (talk | contribs)

The Emulation argument for human-level AI argues that since whole brain emulation seems feasible, human-level AI must also be feasible. The argument rests on many underlying assumptions, most of which are explored by Chalmers (2010)[1]. Perhaps the most debated premise is the claim that a brain emulation could have a conscious mind, or that consciousness is not fundamental to human intelligence. Chalmers [1] formalized the argument as follows:

  • (i) The human brain is a machine.
  • (ii) We will have the capacity to emulate this machine (before long).
  • (iii) If we emulate this machine, there will be AI.
  • (iv) Absent defeaters, there will be AI (before long).
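
The logical structure of the four steps above can be sketched as a simple propositional argument. The letters below are my own labels, not Chalmers' notation: $M$ for "the human brain is a machine", $E$ for "we will emulate this machine (before long)", $A$ for "there will be AI (before long)", and $D$ for "a defeater occurs".

\begin{align*}
&\text{(i)} \quad M \\
&\text{(ii)} \quad E \\
&\text{(iii)} \quad (M \land E) \rightarrow A \\
&\text{(iv)} \quad \therefore\ \neg D \rightarrow A
\end{align*}

On this reading, (i) and (ii) jointly satisfy the antecedent of (iii), yielding $A$; the "absent defeaters" clause in (iv) hedges the conclusion against events (e.g. catastrophe or deliberate prevention) that would block emulation from occurring.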


  1. Chalmers, David (2010). "The Singularity: A Philosophical Analysis". Journal of Consciousness Studies, 17 (9-10), pp. 7-65.