Rationality materials

From Lesswrongwiki
Revision as of 09:47, 23 June 2012 by Vladimir Nesov (typo)

To read Less Wrong systematically, read the Sequences. The Core Sequences are a collection of posts that focus on fundamental rationality skills, and they are the foundation for most of the concepts discussed on Less Wrong. They were written by Eliezer Yudkowsky between 2006 and 2009 on economist Robin Hanson's blog, Overcoming Bias.

The best way to read through the Sequences is via the Sequence index, which lists Yudkowsky's posts in logical order.

There is also a dependency tree of all of Eliezer Yudkowsky's posts from November 2006 to December 2008 that can be used as a more thorough index of the material. In addition, Less Wrong users have created several alternative indexes of the Sequences.

How to Run a Successful Less Wrong Meetup is a booklet that can help you organize better meetups; it is available as a nicely formatted PDF.

Here is a short sample of the Sequences:

Core Sequences

Map and Territory
Mysterious Answers to Mysterious Questions
A Human's Guide to Words
How To Actually Change Your Mind
Reductionism

Other Sequences

Positivism, Self Deception, and Neuroscience
Decision Theory of Newcomblike Problems
The Science of Winning at Life
Metaethics Sequence
Priming and Implicit Association
Fun Theory
Living Luminously
Challenging the Difficult
Decision Analysis
Rationality and Philosophy
The Craft and the Community
Quantum Physics


In addition to the Sequences, Less Wrong contributors blog about cognitive science, statistics, philosophy, and other topics related to epistemic and instrumental rationality. An archive of all of Less Wrong's articles, dating back to 2006, is available. The following is a sampling of content from the main blog:

Heuristics and biases
Epistemology
Identity and signalling
Scholarship and learning
Moral philosophy
Statistics
Charity
Belief calibration
Decision theory
Akrasia, motivation, and self-improvement
Existential risk and risks from artificial intelligence