Carl Shulman
Carl Shulman is a Research Fellow at the Machine Intelligence Research Institute who has authored and co-authored several papers on AI risk, including:
- “How Hard is Artificial Intelligence? Evolutionary Arguments and Selection Effects” [http://www.nickbostrom.com/aievolution.pdf], an analysis of the implications of the observation selection effect for the evolutionary argument for human-level AI.
- “Whole Brain Emulation and the Evolution of Superorganisms” [http://intelligence.org/files/WBE-Superorgs.pdf], which argues that evolutionary pressures would favor increased coordination among emulated brains, leading to the emergence of superorganisms.
- “Implications of a Software-Limited Singularity” [http://intelligence.org/files/SoftwareLimited.pdf], which argues that human-level AI is likely to be developed before 2060.
Previously, he worked at Clarium Capital Management, a global macro hedge fund, and at the law firm Reed Smith LLP. He attended New York University School of Law and holds a BA in philosophy from Harvard University.