Difference between revisions of "Carl Shulman"
From Lesswrongwiki

==See Also==
*[http://lesswrong.com/lw/7ob/timeline_of_carl_shulman_publications/ Timeline of Carl Shulman publications]
*:[https://timelines.issarice.com/wiki/Timeline_of_Carl_Shulman_publications More up-to-date and comprehensive timeline of publications]
*[http://80000hours.org/members/carl-shulman Carl Shulman’s profile at 80,000 Hours]

Revision as of 07:35, 8 July 2017
Carl Shulman is a Research Fellow at the Machine Intelligence Research Institute who has authored and co-authored several papers on AI risk, including:
- “How Hard is Artificial Intelligence? Evolutionary Arguments and Selection Effects” [1], an analysis of the implications of observation selection effects for the evolutionary argument for human-level AI
- “Whole Brain Emulation and the Evolution of Superorganisms” [2], which argues that there are pressures favoring the emergence of increased coordination among emulated brains, in the form of superorganisms
- “Implications of a Software-Limited Singularity” [3], which argues that human-level AI is highly likely to arrive before 2060
Previously, he worked at Clarium Capital Management, a global macro hedge fund, and at the law firm Reed Smith LLP. He attended New York University School of Law and holds a BA in philosophy from Harvard University.