User:So8res

'''Nate Soares''' is a research fellow of the [http://intelligence.org Machine Intelligence Research Institute], where he does [http://wiki.lesswrong.com/wiki/Friendly_AI Friendly AI] research. Nate blogs at [http://mindingourway.com/ MindingOurWay.com]. He wrote the [http://intelligence.org/files/TechnicalAgenda.pdf MIRI technical agenda] and the [http://intelligence.org/research-guide/ MIRI research guide].

Nate is an ex-Google employee who encountered the idea that the development of smarter-than-human AI systems could pose an existential threat to humanity in early 2013, and decided in mid-2013 that the idea was likely correct. He then spent six months studying the relevant math, attended a MIRI workshop in late 2013, and joined MIRI as a research fellow in early 2014.

During his studies, Nate reviewed many of the textbooks on the old MIRI course list. Afterwards, he wrote a short series of posts discussing the mechanics of that highly productive study period.

Other popular posts include tips for learning difficult things, and a discussion of how to care intensely about big problems in the world, even when the internal feeling of caring is weak:
* [http://lesswrong.com/lw/j10/on_learning_difficult_things/ On learning difficult things]
* [http://lesswrong.com/lw/l30/on_caring/ On caring]
  
 
At MIRI, Nate has authored or co-authored a number of papers, and has given a talk on decision theory (as it relates to FAI research) at Purdue University.

* [https://intelligence.org/files/TechnicalAgenda.pdf Aligning Superintelligence with Human Interests: A Technical Research Agenda] gives an overview of MIRI's technical research agenda as of late 2014.
* [https://intelligence.org/files/RealisticWorldModels.pdf Formalizing Two Problems of Realistic World Models] discusses the problems of naturalized induction and ontology identification, two active research topics in FAI.
* [http://intelligence.org/files/TowardIdealizedDecisionTheory.pdf Toward Idealized Decision Theory] introduces the field of decision theory as it relates to the study of FAI.
* [http://intelligence.org/files/QuestionsLogicalUncertainty.pdf Questions of Reasoning Under Logical Uncertainty] introduces the study of logical uncertainty as it pertains to FAI research.
* [https://intelligence.org/files/VingeanReflection.pdf Vingean Reflection: Reliable Reasoning for Self-Modifying Agents] introduces the study of reliable bounded reasoning systems that reason about systems more intelligent than the reasoner.
* [http://intelligence.org/files/Corrigibility.pdf Corrigibility] introduces the field of corrigibility, aimed at understanding methods of reasoning that avoid the manipulation and deception incentives that an intelligent agent would have by default.
* [https://intelligence.org/files/ValueLearningProblem.pdf The Value Learning Problem] touches upon a number of open problems relevant to value learning.
* [https://intelligence.org/files/AnnotatedBibliography.pdf Aligning Superintelligence with Human Interests: An Annotated Bibliography] collects papers relevant to MIRI's research program, sorted by topic and ordered to make it easier to get to the cutting edge in any of six active research areas.
* [http://intelligence.org/files/ProblemsSelfReference.pdf Problems of self-reference in stable, self-improving space-time embedded intelligence] introduces a toy model of self-modifying agents using a suggester-verifier architecture (sketched after this list).
* [http://machine-intelligence.github.io/Botworld/Botworld.pdf Botworld] describes a cellular automaton created by [http://lesswrong.com/user/Benja Benja] and Nate to provide a toy environment for studying self-modifying agents.
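
As a rough illustration of the suggester-verifier architecture mentioned above: the agent keeps a fallback policy and adopts a proposed self-modification only after an independent verifier checks it against a fixed criterion. The Python sketch below is a minimal toy; every name in it, and the trivial acceptance test standing in for formal verification, is invented for illustration and is not drawn from the paper.

 # Toy sketch of a suggester-verifier loop (illustrative only).
 # A real proof-based agent would require a machine-checkable
 # proof that the candidate satisfies its goal criterion.
 
 def default_policy(obs):
     # Fallback behavior used when no suggestion is verified.
     return "noop"
 
 def suggester(obs):
     # Proposes a candidate replacement policy; it is untrusted
     # and may be arbitrarily bad.
     def candidate(o):
         return "explore"
     return candidate
 
 def verifier(candidate):
     # Stand-in for formal verification: accept the candidate
     # only if it passes a fixed check on a sample input.
     try:
         return candidate("sample") in {"noop", "explore"}
     except Exception:
         return False
 
 def agent(observations):
     policy = default_policy
     for obs in observations:
         candidate = suggester(obs)
         if verifier(candidate):
             policy = candidate  # verified self-modification
         yield policy(obs)
 
 print(list(agent(["o1", "o2"])))  # -> ['explore', 'explore']

The intended design point is that the safety burden rests on the verifier rather than the suggester: suggestions can come from an arbitrarily untrusted source, because unverified candidates are never adopted.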
