Carl Shulman
From Lesswrongwiki
'''Carl Shulman''' is a research fellow at the Singularity Institute who has authored and co-authored several papers on AI risk, including:
* “How Hard is Artificial Intelligence? Evolutionary Arguments and Selection Effects” [http://www.nickbostrom.com/aievolution.pdf], an analysis of the implications of the [[observer selection effect]] for the evolutionary argument for human-level AI.
* “Whole Brain Emulation and the Evolution of Superorganisms” [http://intelligence.org/files/WBE-Superorgs.pdf], which argues that there are pressures favoring the emergence of increased coordination between [[Whole brain emulation|emulated brains]], in the form of superorganisms.
* “Implications of a Software-Limited Singularity” [http://intelligence.org/files/SoftwareLimited.pdf], which argues that human-level AI is highly probable before 2060.
Previously, he worked at Clarium Capital Management, a global macro hedge fund, and at the law firm Reed Smith LLP. He attended New York University School of Law and holds a BA in philosophy from Harvard University.
==See Also==
*[http://lesswrong.com/lw/7ob/timeline_of_carl_shulman_publications/ Timeline of Carl Shulman publications]
*[http://80000hours.org/members/carl-shulman Carl Shulman’s profile at 80,000 Hours]