Emulation argument for human-level AI

 
 
==Possible negative consequences==

A world populated by emulated brains could have severe negative consequences, such as:
 
*An inherent inability to be conscious, if certain philosophers are right.<ref>LUCAS, John (1961) "Minds, Machines and Gödel", Philosophy, 36, pp. 112–127.</ref> <ref>DREYFUS, H. (1972) What Computers Can't Do, New York: Harper & Row.</ref> <ref>PENROSE, Roger (1994) Shadows of the Mind, Oxford: Oxford University Press.</ref> <ref>BLOCK, Ned (1981) "Psychologism and Behaviorism", Philosophical Review, 90, pp. 5–43.</ref>
 
*The elimination of culture in general, as competitive pressures increasingly penalize inefficient activities such as flamboyant displays.<ref>BOSTROM, Nick (2004) "The Future of Human Evolution", in Death and Anti-Death: Two Hundred Years After Kant, Fifty Years After Turing, ed. Charles Tandy (Ria University Press: Palo Alto, California), pp. 339–371. Available at: http://www.nickbostrom.com/fut/evolution.pdf</ref>
 
*Near-zero costs of reproduction, pushing most [[Economic consequences of AI and whole brain emulation|emulations to live in a subsistence state]].<ref>HANSON, Robin (1994) "If Uploads Come First: The Crack of a Future Dawn", Extropy, 6(2). Available at: http://hanson.gmu.edu/uploads.html</ref>
 
  
 

Latest revision as of 01:00, 12 September 2012

The Emulation argument for human-level AI argues that since whole brain emulation seems feasible, human-level AI must also be feasible. The argument rests on several underlying assumptions, most of which are explored by Chalmers (2010)[1]. Perhaps the most debated premise is that a brain emulation could have a conscious mind, or that consciousness isn't fundamental to human intelligence. Chalmers[1] formalized the argument as follows:

*(i) The human brain is a machine.
*(ii) We will have the capacity to emulate this machine (before long).
*(iii) If we emulate this machine, there will be AI.
*(iv) Absent defeaters, there will be AI (before long).


==References==

1. CHALMERS, David (2010) "The Singularity: A Philosophical Analysis", Journal of Consciousness Studies, 17(9–10), pp. 7–65.