Carl Shulman is a research fellow at the Singularity Institute who has authored and co-authored several papers on AI risk, including:
- “How Hard is Artificial Intelligence? Evolutionary Arguments and Selection Effects” (http://www.nickbostrom.com/aievolution.pdf), an analysis of the implications of the Observation selection effect for the Evolutionary argument for human-level AI.
- “Whole Brain Emulation and the Evolution of Superorganisms”, which argues that there are pressures favoring the emergence of increased coordination between emulated brains, in the form of superorganisms.
- “Implications of a Software-Limited Singularity”, which argues for the high probability of human-level AI before 2060.
Previously, he worked at Clarium Capital Management, a global macro hedge fund, and at the law firm Reed Smith LLP. He attended New York University School of Law and holds a BA in philosophy from Harvard University.