User:So8res
Nate Soares is a research fellow at the Machine Intelligence Research Institute (MIRI), where he does Friendly AI (FAI) research. Nate blogs at MindingOurWay.com. He wrote the MIRI technical agenda and the MIRI research guide.
Nate is an ex-Google employee. In early 2013 he encountered the idea that the development of smarter-than-human AI systems could pose an existential threat to humanity, and by mid-2013 he had concluded that the idea is likely correct. He then spent six months studying the relevant math, attended a MIRI workshop in late 2013, and joined MIRI as a research fellow in early 2014.
During his studies, Nate reviewed many of the textbooks on the old MIRI course list. Afterwards, he wrote a short series of posts discussing the mechanics of that highly productive study period:
- The mechanics of my recent productivity
- Habitual Productivity
- Deregulating Distraction, Moving Towards the Goal, and Level Hopping
- Dark Arts of Rationality
- On Saving the World
Other popular posts include tips for learning difficult things and a discussion of how to care intensely about big problems in the world, even when the internal feeling of caring is weak.
At MIRI, Nate has authored or co-authored a number of papers, and has given a talk on decision theory (as it relates to FAI research) at Purdue University:
- Aligning Superintelligence with Human Interests: A Technical Research Agenda gives an overview of MIRI's technical research agenda as of late 2014.
- Formalizing Two Problems of Realistic World Models discusses the problems of naturalized induction and ontology identification, two active research topics in FAI.
- Toward Idealized Decision Theory introduces the field of decision theory as it relates to the study of FAI.
- Questions of Reasoning Under Logical Uncertainty introduces the study of logical uncertainty as it pertains to FAI research.
- Vingean Reflection: Reliable Reasoning for Self-Modifying Agents introduces the study of reliable reasoning in bounded systems that must reason about systems more intelligent than themselves.
- Corrigibility introduces the field of corrigibility, aimed at understanding methods of reasoning that avoid the manipulation and deception incentives an intelligent agent would have by default.
- The Value Learning Problem touches upon a number of open problems relevant to value learning.
- Aligning Superintelligence with Human Interests: An Annotated Bibliography collects and organizes a number of papers relevant to MIRI's research program, sorted by topic and ordered to make it easier to get to the cutting edge in any of six active research areas.
- Problems of self-reference in stable, self-improving space-time embedded intelligence introduces a toy model of self-modifying agents using a suggester-verifier architecture.
- Botworld describes a cellular automaton created by Benja Fallenstein and Nate to provide a toy environment for studying self-modifying agents.
- "Why ain't you rich?" Why our current understanding of 'rational choice' isn't good enough for superintelligence is a recording of the talk at Purdue.
Links
- Nate's posts on LessWrong
- Nate's personal website
- MindingOurWay, Nate's personal blog