Emulation argument for human-level AI
The emulation argument for human-level AI holds that since whole brain emulation seems feasible, human-level AI must also be feasible. The argument rests on many underlying assumptions, most of which are explored by Chalmers (2010). Perhaps its most debated premise is that a brain emulation could have a conscious mind, or alternatively that consciousness is not essential to human intelligence. Chalmers formalized the argument as follows:
- (i) The human brain is a machine.
- (ii) We will have the capacity to emulate this machine (before long).
- (iii) If we emulate this machine, there will be AI.
- (iv) Absent defeaters, there will be AI (before long).
A world populated by brain emulations could have severe negative consequences, such as:
- An inherent inability of emulations to be conscious, if some philosophers are right.
- The erosion of culture in general, as competition imposes an ever-increasing penalty on inefficient activities such as flamboyant displays.
- Near-zero reproduction costs, pushing most emulations to live at a subsistence level.
References
- CHALMERS, David. (2010) "The Singularity: A Philosophical Analysis", Journal of Consciousness Studies, 17(9-10), pp. 7-65.
- LUCAS, John. (1961) "Minds, Machines, and Gödel", Philosophy, 36, pp. 112-127.
- DREYFUS, Hubert. (1972) What Computers Can't Do, New York: Harper & Row.
- PENROSE, Roger. (1994) Shadows of the Mind, Oxford: Oxford University Press.
- BLOCK, Ned. (1981) "Psychologism and Behaviorism", Philosophical Review, 90, pp. 5-43.
- BOSTROM, Nick. (2004) "The Future of Human Evolution", in Death and Anti-Death: Two Hundred Years After Kant, Fifty Years After Turing, ed. Charles Tandy, Palo Alto, CA: Ria University Press, pp. 339-371. Available at: http://www.nickbostrom.com/fut/evolution.pdf
- HANSON, Robin. (1994) "If Uploads Come First: The Crack of a Future Dawn", Extropy, 6(2). Available at: http://hanson.gmu.edu/uploads.html