Difference between revisions of "User:So8res"

From Lesswrongwiki
 
 
'''Nate Soares''' is a research fellow of the [http://intelligence.org Machine Intelligence Research Institute], where he does [http://wiki.lesswrong.com/wiki/Friendly_AI Friendly AI] research.  
 
Nate blogs at [http://mindingourway.com/ MindingOurWay.com]. He wrote the [http://intelligence.org/files/TechnicalAgenda.pdf MIRI technical agenda] and the [http://intelligence.org/research-guide/ MIRI research guide].
  
 
Nate is an ex-Google employee who encountered the idea that the development of smarter-than-human AI systems could pose an existential threat to humanity in early 2013, and decided that this idea is likely correct in mid 2013. He then spent six months studying the relevant math, attended a MIRI workshop in late 2013, and joined MIRI as a research fellow in early 2014.
 
 
At MIRI, Nate has authored or co-authored a number of papers, and has given a talk on decision theory (as it relates to FAI research) at Purdue University.
 
  
* [http://intelligence.org/files/TowardIdealizedDecisionTheory.pdf Toward Idealized Decision Theory] introduces the field of decision theory as it relates to the study of FAI.
* [http://intelligence.org/files/QuestionsLogicalUncertainty.pdf Questions of Reasoning Under Logical Uncertainty] introduces the study of logical uncertainty as it pertains to FAI research.
* [http://intelligence.org/files/Corrigibility.pdf Corrigibility] introduces the field of corrigibility, aimed at understanding methods of reasoning that avoid the manipulation and deception incentives that an intelligent agent would have by default.
 
* [http://intelligence.org/files/ProblemsSelfReference.pdf Problems of self-reference in stable, self-improving space-time embedded intelligence] introduces a toy model of self-modifying agents using a suggester-verifier architecture.
 
 
* [http://machine-intelligence.github.io/Botworld/Botworld.pdf Botworld] describes a cellular automaton created by [http://lesswrong.com/user/Benja Benja] and Nate to provide a toy environment for studying self-modifying agents.
 

Revision as of 08:19, 16 January 2015



During his studies, Nate reviewed many of the textbooks on the old MIRI course list. Afterwards, he wrote a short series of posts discussing the mechanics of that highly productive study period.

Other popular posts include tips for learning difficult things, and a discussion of how to care intensely about big problems in the world, even when the internal feeling of caring is weak.

