Nate Soares is a research fellow at the Machine Intelligence Research Institute, where he does Friendly AI research. Nate blogs at MindingOurWay.com. He wrote the MIRI technical agenda and the MIRI research guide.
Nate is an ex-Google employee who encountered the idea that the development of smarter-than-human AI systems could pose an existential threat to humanity in early 2013, and by mid-2013 had concluded that this idea was likely correct. He then spent six months studying the relevant math, attended a MIRI workshop in late 2013, and joined MIRI as a research fellow in early 2014.
During his studies, Nate reviewed many of the textbooks on the old MIRI course list. Afterwards, he wrote a short series of posts discussing the mechanics of that highly productive study period:
- The mechanics of my recent productivity
- Habitual Productivity
- Deregulating Distraction, Moving Towards the Goal, and Level Hopping
- Dark Arts of Rationality
- On Saving the World
Other popular posts include tips for learning difficult things and a discussion of how to care intensely about big problems in the world, even when the internal feeling of caring is weak.
At MIRI, Nate has authored or co-authored a number of papers, and has given a talk on decision theory (as it relates to FAI research) at Purdue University.
- Toward Idealized Decision Theory introduces the field of decision theory as it relates to the study of FAI.
- Questions of Reasoning Under Logical Uncertainty introduces the study of logical uncertainty as it pertains to FAI research.
- Corrigibility introduces the study of corrigibility, which aims to understand methods of reasoning that avoid the manipulation and deception incentives an intelligent agent would have by default.
- Problems of self-reference in stable, self-improving space-time embedded intelligence introduces a toy model of self-modifying agents using a suggester-verifier architecture.
- Botworld describes a cellular automaton created by Benja and Nate to provide a toy environment for studying self-modifying agents.
- "Why ain't you rich?" Why our current understanding of 'rational choice' isn't good enough for superintelligence is a recording of the talk at Purdue.