Carl Shulman

'''Carl Shulman''' is a Research Fellow at the [[Machine Intelligence Research Institute]] who has authored and co-authored several papers on AI risk, including:
 
* “How Hard is Artificial Intelligence? Evolutionary Arguments and Selection Effects” [http://www.nickbostrom.com/aievolution.pdf], an analysis of the implications of the [[Observation selection effect]] on the [[Evolutionary argument for human-level AI]]
 
* “Whole Brain Emulation and the Evolution of Superorganisms” [http://intelligence.org/files/WBE-Superorgs.pdf], which argues that there are pressures favoring the emergence of increased coordination between [[Whole brain emulation|emulated brains]], in the form of superorganisms.

* “Implications of a Software-Limited Singularity” [http://intelligence.org/files/SoftwareLimited.pdf], which argues that human-level AI before 2060 is highly probable.
 
Previously, he worked at Clarium Capital Management, a global macro hedge fund, and at the law firm Reed Smith LLP. He attended New York University School of Law and holds a BA in philosophy from Harvard University.
 
 
==See Also==  
 
*[http://lesswrong.com/lw/7ob/timeline_of_carl_shulman_publications/ Timeline of Carl Shulman publications]
*:[https://timelines.issarice.com/wiki/Timeline_of_Carl_Shulman_publications More up-to-date and comprehensive timeline of publications]
*[http://80000hours.org/members/carl-shulman Carl Shulman’s 80,000 Hours profile]
