Book I: Map and Territory

A: Predictably Wrong

What Do We Mean By "Rationality"?

When we talk about rationality, we're generally talking about either epistemic rationality (systematic methods of finding out the truth) or instrumental rationality (systematic methods of making the world more like we would like it to be). We can discuss these in the forms of probability theory and decision theory, but this doesn't fully cover the difficulty of being rational as a human. There is a lot more to rationality than just the formal theories.

Feeling Rational

Strong emotions can be rational. A rational belief that something good happened leads to rational happiness. But your emotions ought not to change your beliefs about events that do not depend causally on your emotions.

Why truth? And...

Truth can be instrumentally useful and intrinsically satisfying.

(alternate summary:)

Why should we seek truth? Pure curiosity is an emotion, but not therefore irrational. Instrumental value is another reason, with the advantage of giving an outside verification criterion. A third reason is conceiving of truth as a moral duty, but this might invite moralizing about "proper" modes of thinking that don't work. Still, we need to figure out how to think properly. That means avoiding biases, for which see the next post.

(alternate summary:)

You have an instrumental motive to care about the truth of your beliefs about anything you care about.

...What's a bias, again?

Biases are obstacles to truth seeking caused by one's own mental machinery.

(alternate summary:)

There are many more ways to miss the truth than to find it. Avoiding the things we call "biases" matters because they obstruct truth-finding: biases are those obstacles that arise from the structure of the human mind itself, rather than from insufficient information or computing power, from brain damage, or from bad learned habits or beliefs. But ultimately, what we choose to call a "bias" matters less than noticing and correcting each obstacle, whatever its source.

Availability

Availability bias is a tendency to estimate the probability of an event based on whatever evidence about that event pops into your mind, without taking into account the ways in which some pieces of evidence are more memorable, or easier to come by, than others. The bias consists in relying on a mismatched data set, which produces a distorted model and a biased estimate.

Burdensome Details

If you want to avoid the conjunction fallacy, you must try to feel a stronger emotional impact from Occam's Razor. Each additional detail added to a claim must feel as though it is driving the probability of the claim down towards zero.

Planning Fallacy

We tend to plan by envisioning that everything will go as expected. Even assuming that such an estimate is accurate conditional on everything going as expected, things will not go as expected. As a result, we routinely see outcomes worse than the ex ante worst-case scenario.

(alternate summary:)

The planning fallacy is a tendency to overestimate how efficiently you will achieve a task. The data set you consider consists of the simple, cached ways in which you go about accomplishing the task, and omits the unanticipated problems and more complex ways in which the process may unfold. As a result, the model fails to adequately describe the phenomenon, and the answer comes out systematically wrong.

Illusion of Transparency: Why No One Understands You

Everyone knows what their own words mean, but experiments have confirmed that we systematically overestimate how much sense we are making to others.

Expecting Short Inferential Distances

Humans evolved in an environment where we almost never needed to explain long inferential chains of reasoning. This fact may account for the difficulty many people have when trying to explain complicated subjects. We only explain the last step of the argument, and not every step that must be taken from our listener's premises.

The Lens That Sees Its Flaws

Part of what makes humans different from other animals is our ability to reason about our reasoning. Mice do not think about the cognitive algorithms that generate their belief that the cat is hunting them. Our ability to think about what sort of thought processes would lead to correct beliefs is what gave rise to Science. This ability makes our admittedly flawed minds much more powerful.

B: Fake Beliefs

Making Beliefs Pay Rent (in Anticipated Experiences)

Not every belief that we have is directly about sensory experience, but beliefs should pay rent in anticipations of experience. For example, if I believe that "Gravity is 9.8 m/s^2" then I should be able to predict where I'll see the second hand on my watch at the time I hear the crash of a bowling ball dropped off a building. On the other hand, if your postmodern English professor says that the famous writer Wulky is a "post-utopian," this may not actually mean anything. The moral is to ask "What experiences do I anticipate?" instead of "What statements do I believe?"
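
As a minimal sketch of such a prediction (assuming an invented 50-meter building and ignoring air resistance):

```python
# A belief paying rent in anticipated experience: if gravitational
# acceleration is 9.8 m/s^2, a bowling ball dropped from rest should hit
# the ground after a predictable delay. The 50 m height is an invented
# example value; air resistance is ignored.

def fall_time(height_m: float, g: float = 9.8) -> float:
    """Seconds for an object to fall height_m meters from rest."""
    return (2 * height_m / g) ** 0.5

print(f"Expect to hear the crash after about {fall_time(50.0):.2f} s")  # ~3.19 s
```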

A Fable of Science and Politics

People respond in different ways to clear evidence they're wrong, not always by updating and moving on.

(alternate summary:)

A story about an underground society divided into two factions: one that believes that the sky is blue and one that believes the sky is green. At the end of the story, the reactions of various citizens to discovering the outside world and finally seeing the color of the sky are described.

Belief in Belief

Suppose someone claims to have a dragon in their garage, but as soon as you go to look, they say, "It's an invisible dragon!" The remarkable thing is that they know in advance exactly which experimental results they shall have to excuse, indicating that some part of their mind knows what's really going on. And yet they may honestly believe they believe there's a dragon in the garage. They may perhaps believe it is virtuous to believe there is a dragon in the garage, and believe themselves virtuous. Even though they anticipate as if there is no dragon.

Bayesian Judo

You can have some fun with people whose anticipations get out of sync with what they believe they believe. This post recounts a conversation in which a theist had to backpedal when he realized that, by drawing an empirical inference from his religion, he had opened up his religion to empirical disproof.

Pretending to be Wise

Trying to signal wisdom or maturity by taking a neutral position is very seldom the right course of action.

Religion's Claim to be Non-Disprovable

Religions used to claim authority in all domains, including biology, cosmology, and history. Only recently have religions attempted to be non-disprovable by confining themselves to ethical claims. But the ethical claims in scripture ought to be even more obviously wrong than the other claims, making the idea of non-overlapping magisteria a Big Lie.

Professing and Cheering

A woman on a panel enthusiastically declared her belief in a pagan creation myth, flaunting its most outrageously improbable elements. This seemed weirder than "belief in belief" (she didn't act like she needed validation) or "religious profession" (she didn't try to act like she took her religion seriously). So, what was she doing? She was cheering for paganism — cheering loudly by making ridiculous claims.

Belief as Attire

When you've stopped anticipating-as-if something is true, but still believe it is virtuous to believe it, this does not create the true fire of the child who really does believe. On the other hand, it is very easy for people to be passionate about group identification - sports teams, political sports teams - and this may account for the passion of beliefs worn as team-identification attire.

Applause Lights

Words like "democracy" or "freedom" are applause lights - no one disapproves of them, so they can be used to signal conformity and hand-wave away difficult problems. If you hear people talking about the importance of "balancing risks and opportunities" or of solving problems "through a collaborative process" that aren't followed up by any specifics, then the words are applause lights, not real thoughts.

C: Noticing Confusion

Focus Your Uncertainty

If you are paid for post-hoc analysis, you might like theories that "explain" all possible outcomes equally well, without focusing uncertainty. But what if you don't know the outcome yet, and you need to have an explanation ready in 100 minutes? Then you want to spend most of your time on excuses for the outcomes that you anticipate most, so you still need a theory that focuses your uncertainty.

What is Evidence?

Evidence is an event connected by a chain of causes and effects to whatever it is you want to learn about. It also has to be an event that is more likely if reality is one way, than if reality is another. If a belief is not formed this way, it cannot be trusted.

For good social reasons, we require legal and scientific evidence to be more than just rational evidence. Hearsay is rational evidence, but as legal evidence it would invite abuse. Scientific evidence must be public and reproducible by everyone, because we want a pool of especially reliable beliefs. Thus, Science is about reproducible conditions, not the history of any one experiment.

How Much Evidence Does It Take?

If you are considering one hypothesis out of many, or that hypothesis is more implausible than others, or you wish to know with greater confidence, you will need more evidence. Ignoring this rule will cause you to jump to a belief without enough evidence, and thus be wrong.
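
A rough sketch of this bookkeeping in terms of bits of evidence (the hypothesis count and likelihood ratio below are illustrative, not taken from the original post):

```python
import math

# Singling out one hypothesis from n equally likely candidates takes
# log2(n) bits of evidence; an observation with likelihood ratio L
# supplies log2(L) bits.

def bits_needed(num_hypotheses: int) -> float:
    return math.log2(num_hypotheses)

def bits_from_observation(likelihood_ratio: float) -> float:
    return math.log2(likelihood_ratio)

print(bits_needed(1024))            # 10.0 bits to pick out 1 of 1024
print(bits_from_observation(10.0))  # ~3.32 bits from each 10:1 observation
# Three such observations give only ~9.97 bits, so at least four are needed.
```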

Einstein's Arrogance

Albert Einstein, when asked what he would do if an experiment disproved his theory of general relativity, responded with "I would feel sorry for [the experimenter]. The theory is correct." While this may sound like arrogance, Einstein doesn't look nearly as bad from a Bayesian perspective. In order to even consider the hypothesis of general relativity in the first place, he would have needed a large amount of Bayesian evidence.

Occam's Razor

To a human, Thor feels like a simpler explanation for lightning than Maxwell's equations, but that is because we don't see the full complexity of an intelligent mind. However, if you try to write a computer program to simulate Thor and a computer program to simulate Maxwell's equations, the Maxwell's equations program will be far shorter and easier to write. This is how the complexity of a hypothesis is measured in the formalisms of Occam's Razor.

Your Strength as a Rationalist

A hypothesis that forbids nothing permits everything, and thus fails to constrain anticipation. Your strength as a rationalist is your ability to be more confused by fiction than by reality. If you are equally good at explaining any outcome, you have zero knowledge.

Absence of Evidence Is Evidence of Absence

Absence of proof is not proof of absence. But absence of evidence is always evidence of absence. According to the probability calculus, if P(H|E) > P(H) (observing E would be evidence for hypothesis H), then P(H|~E) < P(H) (absence of E is evidence against H). The absence of an observation may be strong evidence or very weak evidence of absence, but it is always evidence.
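
A minimal numerical check of that identity (the probabilities below are arbitrary examples):

```python
# Law of total probability: P(H) = P(H|E)P(E) + P(H|~E)P(~E).
# If P(H|E) > P(H), the average can only balance if P(H|~E) < P(H).

p_e = 0.3              # probability of observing E
p_h_given_e = 0.9      # H looks likely when E is observed
p_h_given_not_e = 0.4  # H looks less likely when E is absent

p_h = p_h_given_e * p_e + p_h_given_not_e * (1 - p_e)
print(round(p_h, 2))  # 0.55: E raises H (0.9 > 0.55), so ~E must lower it (0.4 < 0.55)
```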

Conservation of Expected Evidence

If you are about to make an observation, then the expected value of your posterior probability must equal your current prior probability. On average, you must expect to be exactly as confident as when you started out. If you are a true Bayesian, you cannot seek evidence to confirm your theory, because you do not expect any evidence to do that. You can only seek evidence to test your theory.
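
A worked example with invented numbers: however the observation comes out, the probability-weighted average of the possible posteriors equals the prior.

```python
p_h = 0.5          # prior probability of hypothesis H
p_e_given_h = 0.8  # likelihood of evidence E if H is true
p_e_given_not_h = 0.2

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)  # 0.5
post_if_e = p_e_given_h * p_h / p_e                    # 0.8, shifted up
post_if_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)    # 0.2, shifted down

print(p_e * post_if_e + (1 - p_e) * post_if_not_e)     # 0.5, exactly the prior
```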

Hindsight Devalues Science

Hindsight bias leads us to systematically undervalue scientific findings, because we find it too easy to retrofit them into our models of the world. This unfairly devalues the contributions of researchers. Worse, it prevents us from noticing when we are seeing evidence that doesn't fit what we really would have expected. We need to make a conscious effort to be shocked enough.

D: Mysterious Answers

Fake Explanations

People think that fake explanations use words like "magic," while real explanations use scientific words like "heat conduction." But being a real explanation isn't a matter of literary genre. Scientific-sounding words aren't enough. Real explanations constrain anticipation. Ideally, you could explain only the observations that actually happened. Fake explanations could just as well "explain" the opposite of what you observed.

Guessing the Teacher's Password

In schools, "education" often consists of having students memorize answers to specific questions (i.e., the "teacher's password"), rather than learning a predictive model that says what is and isn't likely to happen. Thus, students incorrectly learn to guess at passwords in the face of strange observations rather than admit their confusion. Don't do that: any explanation you give should have a predictive model behind it. If your explanation lacks such a model, start from a recognition of your own confusion and surprise at seeing the result.

Science as Attire

You don't understand the phrase "because of evolution" unless it constrains your anticipations. Otherwise, you are using it as attire to identify yourself with the "scientific" tribe. Similarly, it isn't scientific to reject strongly superhuman AI only because it sounds like science fiction. A scientific rejection would require a theoretical model that bounds possible intelligences. If your proud beliefs don't constrain anticipation, they are probably just passwords or attire.

Fake Causality

It is very easy for a human being to think that a theory predicts a phenomenon, when in fact it was fitted to the phenomenon. Properly designed reasoning systems (such as general AIs) could use knowledge of probability theory to avoid this mistake, but humans have to write down a prediction in advance in order to ensure that their reasoning about causality is correct.

Semantic Stopsigns

There are certain words and phrases that act as "stopsigns" to thinking. They aren't actually explanations and don't help to resolve the issue at hand; they simply act as a marker saying "don't ask any questions."

Mysterious Answers to Mysterious Questions

The theory of vitalism was developed before the idea of biochemistry. It stated that the mysterious properties of living matter, compared to nonliving matter, were due to an "élan vital". This explanation acts as a curiosity-stopper, and leaves the phenomenon just as mysterious and inexplicable as it was before the answer was given. It feels like an explanation, though it fails to constrain anticipation.

The Futility of Emergence

The theory of "emergence" has become very popular, but is just a mysterious answer to a mysterious question. After learning that a property is emergent, you aren't able to make any new predictions.

Say Not "Complexity"

The concept of complexity isn't meaningless, but too often people assume that adding complexity to a system they don't understand will improve it. If you don't know how to solve a problem, adding complexity won't help; better to say "I have no idea" than to say "complexity" and think you've reached an answer.

Positive Bias: Look Into the Dark

Positive bias is the tendency to look for evidence that confirms a hypothesis, rather than disconfirming evidence.

Lawful Uncertainty

When facing a random scenario, the correct response is not to behave randomly. Faced with an irrational universe, throwing away your rationality won't help.

My Wild and Reckless Youth

Traditional rationality (without Bayes' Theorem) allows you to formulate hypotheses without a reason to prefer them to the status quo, as long as they are falsifiable. Even following all the rules of traditional rationality, you can waste a lot of time. It takes a lot of rationality to avoid making mistakes; a moderate level of rationality will just lead you to make new and different mistakes.

Failing to Learn from History

There are no inherently mysterious phenomena, but every phenomenon seems mysterious, right up until the moment that science explains it. It seems to us now that biology, chemistry, and astronomy are naturally the realm of science, but if we had lived through their discoveries, and watched them reduced from mysterious to mundane, we would be more reluctant to believe the next phenomenon is inherently mysterious.

Making History Available

It's easy not to take the lessons of history seriously; our brains aren't well-equipped to translate dry facts into experiences. But imagine living through the whole of human history - imagine watching mysteries be explained, watching civilizations rise and fall, being surprised over and over again - and you'll be less shocked by the strangeness of the next era.

Explain/Worship/Ignore?

When you encounter something you don't understand, you have three options: to seek an explanation, knowing that that explanation will itself require an explanation; to avoid thinking about the mystery at all; or to embrace the mysteriousness of the world and worship your confusion.

"Science" as Curiosity-Stopper

Although science does have explanations for phenomena, it is not enough to simply say that "Science!" is responsible for how something works -- nor is it enough to appeal to something more specific like "electricity" or "conduction". Yet for many people, simply noting that "Science has an answer" is enough to make them no longer curious about how it works. In that respect, "Science" is no different from more blatant curiosity-stoppers like "God did it!" But you shouldn't let your interest die simply because someone else knows the answer (which is a rather strange heuristic anyway): You should only be satisfied with a predictive model, and how a given phenomenon fits into that model.

Truly Part Of You

Any time you believe you've learned something, you should ask yourself, "Could I re-generate this knowledge if it were somehow deleted from my mind, and how would I do so?" If the supposed knowledge is just empty buzzwords, you will recognize that you can't, and therefore that you haven't learned anything. But if it's an actual model of reality, this method will reinforce how the knowledge is entangled with the rest of the world, enabling you to apply it to other domains, and know when you need to update those beliefs. It will have become "truly part of you", growing and changing with the rest of your knowledge.

Book II: How to Actually Change Your Mind

E: Overly Convenient Excuses

The Proper Use of Humility

Use humility to justify further action, not as an excuse for laziness and ignorance.

(alternate summary:)

There are good and bad kinds of humility. Proper humility is not being selectively underconfident about uncomfortable truths. Proper humility is not the same as social modesty, which can be an excuse for not even trying to be right. Proper scientific humility means not just acknowledging one's uncertainty with words, but taking specific actions to plan for the case that one is wrong.

The Third Alternative

People justify Noble Lies by pointing out their benefits over doing nothing. But, if you really need these benefits, you can construct a Third Alternative for getting them. How? You have to search for one. Beware the temptation not to search or to search perfunctorily. Ask yourself, "Did I spend five minutes by the clock trying hard to think of a better alternative?"

Lotteries: A Waste of Hope

Some defend lottery-ticket buying as a rational purchase of fantasy. But you are occupying your valuable brain with a fantasy whose probability is nearly zero, wasting emotional energy. Without the lottery, people might fantasize about things that they can actually do, which might lead to thinking of ways to make the fantasy a reality. To work around a bias, you must first notice it, analyze it, and decide that it is bad. Lottery advocates are failing to complete the third step.

New Improved Lottery

If the opportunity to fantasize about winning justified the lottery, then a "new improved" lottery would be even better. You would buy a nearly-zero chance to become a millionaire at any moment over the next five years. You could spend every moment imagining that you might become a millionaire at that moment.

But There's Still A Chance, Right?

Sometimes, you calculate the probability of a certain event and find that the number is so unbelievably small that your brain really can't keep track of how small it is, any more than you can spot an individual grain of sand on a beach from 100 meters off. But, because you're already thinking about that event enough to calculate the probability of it, it feels like it's still worth keeping track of. It's not.

The Fallacy of Gray

Nothing is perfectly black or white. Everything is gray. However, this does not mean that everything is the same shade of gray. It may be impossible to completely eliminate bias, but it is still worth reducing bias.

Absolute Authority

Those without the understanding of the Quantitative Way will often map the process of arriving at beliefs onto the social domains of Authority. They think that if Science is not infinitely certain, or if it has ever admitted a mistake, then it is no longer a trustworthy source, and can be ignored. This cultural gap is rather difficult to cross.

How to Convince Me That 2 + 2 = 3

The way to convince Eliezer that 2+2=3 is the same as the way to convince him of any proposition: give him enough evidence. If all available evidence, social, mental, and physical, starts indicating that 2+2=3, then you will shortly convince Eliezer that 2+2=3 and that something is wrong with his past beliefs or his recollection of them.

Infinite Certainty

If you say you are 99.9999% confident of a proposition, you're saying that you could make one million equally confident statements and be wrong, on average, about once. Probability 1 indicates a state of infinite certainty. Furthermore, once you assign probability 1 to a proposition, Bayes' theorem says that it can never be changed in response to any evidence. Probability 1 is a lot harder to reach with a human brain than you would think.

0 And 1 Are Not Probabilities

In the ordinary way of writing probabilities, 0 and 1 both seem like entirely reachable quantities. But when you transform probabilities into odds ratios, or log-odds, you realize that getting a proposition to probability 1 would require an infinite amount of evidence.
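
A small illustration of that transform, with arbitrarily chosen probabilities:

```python
import math

def log_odds_bits(p: float) -> float:
    """Log-odds of p, in bits above or below even odds."""
    return math.log2(p / (1 - p))

for p in (0.5, 0.9, 0.99, 0.999999):
    print(f"P = {p}: {log_odds_bits(p):+.1f} bits")
# 0.5 -> +0.0, 0.9 -> +3.2, 0.99 -> +6.6, 0.999999 -> +19.9
# Each additional nine costs ever more evidence; p = 1.0 would divide by
# zero, i.e. it lies infinitely far up the scale.
```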

Your Rationality is My Business

As a human, I have a proper interest in the future of human civilization, including the human pursuit of truth. That makes your rationality my business. The danger is that we will think that we can respond to irrationality with violence. Relativism is not the way to avoid this danger. Instead, commit to using only arguments and evidence, never violence, against irrational thinking.

F: Politics and Rationality

Politics is the Mind-Killer

In your discussions, beware, for people have great difficulty being rational about current political issues. This is no surprise to someone familiar with evolutionary psychology.

(alternate summary:)

People act funny when they talk about politics. In the ancestral environment, being on the wrong side might get you killed, and being on the correct side might get you sex, food, or let you kill your hated rival. If you must talk about politics (for the purpose of teaching rationality), use examples from the distant past. Politics is an extension of war by other means. Arguments are soldiers. Once you know which side you're on, you must support all arguments of that side, and attack all arguments that appear to favor the enemy side; otherwise, it's like stabbing your soldiers in the back - providing aid and comfort to the enemy. If your topic legitimately relates to attempts to ban evolution in school curricula, then go ahead and talk about it, but don't blame it explicitly on the whole Republican/Democratic/Liberal/Conservative/Nationalist Party.

Policy Debates Should Not Appear One-Sided

Debates over outcomes with multiple effects will have arguments both for and against, so you must integrate the evidence, not expect the issue to be completely one-sided.

(alternate summary:)

Robin Hanson proposed a "banned products shop" where things that the government ordinarily would ban are sold. Eliezer responded that this would probably cause at least one stupid and innocent person to die. He was surprised when people inferred from this remark that he was against Robin's idea. Policy questions are complex actions with many consequences. Thus they should only rarely appear one-sided to an objective observer. A person's intelligence is largely a product of circumstances they cannot control. Eliezer argues for cost-benefit analysis instead of traditional libertarian ideas of tough-mindedness (people who do stupid things deserve their consequences).

The Scales of Justice, the Notebook of Rationality

People have an irrational tendency to simplify their assessment of things into how good or bad they are without considering that the things in question may have many distinct and unrelated attributes.

(alternate summary:)

In non-binary answer spaces, you can't add up pro and con arguments along one dimension without risk of getting important factual questions wrong.

Correspondence Bias

Correspondence bias, also known as the fundamental attribution error, is the tendency to attribute the behavior of others to intrinsic dispositions, while excusing one's own behavior as the result of circumstance.

(alternate summary:)

Correspondence Bias is a tendency to attribute to a person a disposition to behave in a particular way, based on observing an episode in which that person behaves in that way. The data set that gets considered consists only of the observed episode, while the target model is of the person's behavior in general, in many possible episodes, in many different possible contexts that may influence the person's behavior.

Are Your Enemies Innately Evil?

People want to think that the Enemy is an innately evil mutant. But, usually, the Enemy is acting as you might in their circumstances. They think that they are the hero in their story and that their motives are just. That doesn't mean that they are right. Killing them may be the best option available. But it is still a tragedy.

Reversed Stupidity Is Not Intelligence

The world's greatest fool may say the Sun is shining, but that doesn't make it dark out. Stalin also believed that 2 + 2 = 4. Stupidity or human evil do not anticorrelate with truth. Arguing against weaker advocates proves nothing, because even the strongest idea will attract weak advocates.

Argument Screens Off Authority

There are many cases in which we should take the authority of experts into account, when we decide whether or not to believe their claims. But, if there are technical arguments that are available, these can screen off the authority of experts.

Hug the Query

The more directly your arguments bear on a question, without intermediate inferences, the more powerful the evidence. We should try to observe evidence that is as near to the original question as possible, so that it screens off as many other arguments as possible.

Rationality and the English Language

George Orwell's writings on language and totalitarianism are critical to understanding rationality. Orwell was an opponent of the use of words to obscure meaning, or to convey ideas without their emotional impact. Language should get the point across - when the effort to convey information gets lost in the effort to sound authoritative, you are acting irrationally.

Human Evil and Muddled Thinking

It's easy to think that rationality and seeking truth is an intellectual exercise, but this ignores the lessons of history. Cognitive biases and muddled thinking allow people to hide from their own mistakes and allow evil to take root. Spreading the truth makes a real difference in defeating evil.

G: Against Rationalization

Knowing About Biases Can Hurt People

Learning common biases won't help you obtain truth if you only use this knowledge to attack beliefs you don't like. Discussions about biases need to first do no harm by emphasizing motivated cognition, the sophistication effect, and dysrationalia, although even knowledge of these can backfire.

Update Yourself Incrementally

Many people think that you must abandon a belief if you admit any counterevidence. Instead, change your belief by small increments. Acknowledge small pieces of counterevidence by shifting your belief down a little. Supporting evidence will follow if your belief is true. "Won't you lose debates if you concede any counterarguments?" Rationality is not for winning debates; it is for deciding which side to join.

One Argument Against An Army

It is tempting to weigh each counterargument by itself against all supporting arguments. No single counterargument can overwhelm all the supporting arguments, so you easily conclude that your theory was right. Indeed, as you win this kind of battle over and over again, you feel ever more confident in your theory. But, in fact, you are just rehearsing already-known evidence in favor of your view.

The Bottom Line

If you first write at the bottom of a sheet of paper, “And therefore, the sky is green!”, it does not matter what arguments you write above it afterward; the conclusion is already written, and it is already correct or already wrong.

What Evidence Filtered Evidence?

Someone tells you only the evidence that they want you to hear. Are you helpless? Forced to update your beliefs until you reach their position? No, you also have to take into account what they could have told you but didn't.

Rationalization

Rationality works forward from evidence to conclusions. Rationalization tries in vain to work backward from favourable conclusions to the evidence. But you cannot rationalize what is not already rational. It is as if "lying" were called "truthization".

A Rational Argument

You can't produce a rational argument for something that isn't rational. First select the rational choice. Then the rational argument is just a list of the same evidence that convinced you.

Avoiding Your Belief's Real Weak Points

When people doubt, they instinctively ask only the questions that have easy answers. When you're doubting one of your most cherished beliefs, close your eyes, empty your mind, grit your teeth, and deliberately think about whatever hurts the most.

Motivated Stopping and Motivated Continuation

When the evidence we've seen points towards a conclusion that we like or dislike, there is a temptation to stop the search for evidence prematurely, or to insist that more evidence is needed.

Fake Justification

We should be suspicious of our tendency to justify our decisions with arguments that did not actually factor into making said decisions. Whatever process you actually use to make your decisions is what determines your effectiveness as a rationalist.

Is That Your True Rejection?

People's stated reason for a rejection may not be the same as the actual reason for that rejection.

Entangled Truths, Contagious Lies

Of Lies and Black Swan Blowups

Dark Side Epistemology

If you want to tell a truly convincing lie, to someone who knows what they're talking about, you either have to lie about lots of specific object level facts, or about more general laws, or about the laws of thought. Lots of the memes out there about how you learn things originally came from people who were trying to convince other people to believe false statements.

H: Against Doublethink

Singlethink

The path to rationality begins when you see a great flaw in your existing art, and discover a drive to improve, to create new skills beyond the helpful but inadequate ones you found in books. Eliezer's first step was to catch what it felt like to shove an unwanted fact to the corner of his mind. Singlethink is the skill of not doublethinking.

Doublethink (Choosing to be Biased)

George Orwell wrote about what he called "doublethink", where a person was able to hold two contradictory thoughts in their mind simultaneously. While some argue that self deception can make you happier, doublethink will actually lead only to problems.

No, Really, I've Deceived Myself

Some people who have fallen into self-deception haven't actually deceived themselves. Some of them simply believe that they have deceived themselves, but have not actually done this.

Belief in Self-Deception

Deceiving yourself is harder than it seems. What looks like a successfully adopted false belief may actually be just a belief in false belief.

Moore's Paradox

People often mistake reasons for endorsing a proposition for reasons to believe that proposition.

Don't Believe You'll Self-Deceive

It may be wise to tell yourself that you will not be able to successfully deceive yourself, because by telling yourself this, you may make it true.

I: Seeing with Fresh Eyes

Anchoring and Adjustment

Exposure to numbers affects guesses on estimation problems by anchoring your mind to a given estimate, even if it's wildly off base. Be aware of the effect random numbers have on your estimation ability.

Priming and Contamination

Even slight exposure to a stimulus is enough to change the outcome of a decision or estimate. See also Never Leave Your Room by Yvain, and Cached Selves by Salamon and Rayhawk.

(alternate summary:)

Contamination by Priming is a problem that relates to the process of implicitly introducing the facts in the attended data set. When you are primed with a concept, the facts related to that concept come to mind easier. As a result, the data set selected by your mind becomes tilted towards the elements related to that concept, even if it has no relation to the question you are trying to answer. Your thinking becomes contaminated, shifted in a particular direction. The data set in your focus of attention becomes less representative of the phenomenon you are trying to model, and more representative of the concepts you were primed with.

Do We Believe Everything We're Told?

Some experiments on priming suggest that mere exposure to a view is enough to get one to passively accept it, at least until it is specifically rejected.

Cached Thoughts

Brains are slow. They need to cache as much as they can. They store answers to questions, so that no new thought is required to answer. Answers copied from others can end up in your head without you ever examining them closely. This makes you say things that you'd never believe if you thought them through. So examine your cached thoughts! Are they true?

The "Outside the Box" Box

When asked to think creatively, there's always a cached thought to fall into. To be truly creative, you must avoid the cached thought and think something actually new, not something that you heard was the latest innovation. Striving for novelty for novelty's sake is futile; instead, aim to be optimal. People who strive to discover truth or to invent good designs may, in the course of time, attain creativity.

Original Seeing

One way to fight cached patterns of thought is to focus on precise concepts.

Stranger Than History

Imagine trying to explain quantum physics, the internet, or any other aspect of modern society to people from 1900. Technology and culture change so quickly that our civilization would be unrecognizable to people 100 years ago; what will the world look like 100 years from now?

The Logical Fallacy of Generalization from Fictional Evidence

The Logical Fallacy of Generalization from Fictional Evidence consists in drawing real-world conclusions from statements invented and selected for the purpose of writing fiction. The data set is not at all representative of the real world, and in particular of whatever real-world phenomenon you need to understand to answer your real-world question. Considering this data set leads to an inadequate model, and inadequate answers.

The Virtue of Narrowness

It is a virtue, not a flaw, to focus on narrow, precise questions rather than grasping for sweeping generalities.

(alternate summary:)

It was perfectly all right for Isaac Newton to explain just gravity, just the way things fall down - and how planets orbit the Sun, and how the Moon generates the tides - but not the role of money in human society or how the heart pumps blood. Sneering at narrowness is rather reminiscent of ancient Greeks who thought that going out and actually looking at things was manual labor, and manual labor was for slaves.

How to Seem (and Be) Deep

Just find ways of violating cached expectations.

(alternate summary:)

To seem deep, find coherent but unusual beliefs, and concentrate on explaining them well. To be deep, you actually have to think for yourself.

We Change Our Minds Less Often Than We Think

We all change our minds occasionally, but we don't constantly, honestly reevaluate every decision and course of action. Once you think you believe something, the chances are good that you already do, for better or worse.

Hold Off On Proposing Solutions

Proposing solutions prematurely is dangerous, because it introduces weak conclusions into the pool of facts you are considering. The data set you think about then becomes weaker, overly tilted toward those premature conclusions, which are likely to be wrong and are less representative of the phenomenon you are trying to model than the initial facts you started from.

The Genetic Fallacy

The genetic fallacy seems like a strange kind of fallacy. The problem is that the original justification for a belief does not always equal the sum of all the evidence that we currently have available. But, on the other hand, it is very easy for people to still believe untruths from a source that they have since rejected.

J: Death Spirals

The Affect Heuristic

Positive and negative emotional impressions exert a greater effect on many decisions than does rational analysis.

Evaluability (And Cheap Holiday Shopping)

It's difficult for humans to evaluate an option except in comparison to other options. Poor decisions result when a poor category for comparison is used. Includes an application for cheap gift-shopping.

(alternate summary:)

Is there a way to exploit human biases to give the impression of largess with cheap gifts? Yes. Humans compare the value/price of an object to other similar objects. A $399 Eee PC is cheap (because other laptops are more expensive), yet a $399 PS3 is expensive (because the alternatives are less expensive). To give the impression of expense in a gift, choose a cheap class of item (say, a candle) and buy the most expensive one around.

Unbounded Scales, Huge Jury Awards, & Futurism

Without a metric for comparison, estimates of, e.g., what sorts of punitive damages should be awarded, or when some future advance will happen, vary widely simply due to the lack of a scale.

The Halo Effect

Positive qualities seem to correlate with each other, whether or not they actually do.

Superhero Bias

It is better to risk your life to save 200 people than to save 3. But someone who risks their life to save 3 people is revealing a more altruistic nature than someone risking their life to save 200. And yet comic books are written about heroes who save 200 innocent schoolchildren, and not police officers saving three prostitutes.

Mere Messiahs

John Perry, an extropian and a transhumanist, died when the north tower of the World Trade Center fell. He knew he was risking his existence to save other people, and though he had hope that he might be able to avoid death, he still helped them. This takes far more courage than dying while expecting to be rewarded in an afterlife for one's virtue.

Affective Death Spirals

Human beings can fall into a feedback loop around something that they hold dear. Every situation they consider, they use their great idea to explain. Because their great idea explained this situation, it now gains weight. Therefore, they should use it to explain more situations. This loop can continue, until they believe Belgium controls the US banking system, or that they can use an invisible blue spirit force to locate parking spots.

Resist the Happy Death Spiral

You can avoid a Happy Death Spiral by (1) splitting the Great Idea into parts, (2) treating every additional detail as burdensome, (3) thinking about the specifics of the causal chain instead of the good or bad feelings, (4) not rehearsing evidence, and (5) not adding happiness from claims that "you can't prove are wrong"; but not by (6) refusing to admire anything too much, (7) conducting a biased search for negative points until you feel unhappy again, or (8) forcibly shoving an idea into a safe box.

Uncritical Supercriticality

One of the most dangerous mistakes that a human being with human psychology can make, is to begin thinking that any argument against their favorite idea must be wrong, because it is against their favorite idea. Alternatively, they could think that any argument that supports their favorite idea must be right. This failure of reasoning has led to massive amounts of suffering and death in world history.

Evaporative Cooling of Group Beliefs

When a cult encounters a blow to its beliefs (a prediction fails to come true, the leader is caught in a scandal, etc.), it will often become more fanatical. In the immediate aftermath, the members who leave will be the ones who were previously the voice of opposition, skepticism, and moderation. Without those members, the cult will slide further in the direction of fanaticism.

When None Dare Urge Restraint

The dark mirror to the happy death spiral is the spiral of hate. When everyone looks good for attacking someone, and anyone who disagrees with any attack must be a sympathizer to the enemy, the results are usually awful. It is too dangerous for there to be anyone in the world about whom we would rather say negative things than accurate things.

The Robbers Cave Experiment

The Robbers Cave Experiment, by Sherif, Harvey, White, Hood, and Sherif (1954/1961), was designed to investigate the causes and remedies of problems between groups. Twenty-two middle-school-aged boys were divided into two groups and placed in a summer camp. From the first time the groups learned of each other's existence, a brutal rivalry began. The only way the counselors managed to bring the groups together was by giving the two groups a common enemy. Any resemblance to modern politics is just your imagination.

Every Cause Wants To Be A Cult

Simply having a good idea at the center of a group of people is not enough to prevent that group from becoming a cult. As long as the idea's adherents are human, they will be vulnerable to the flaws in reasoning that cause cults. Simply basing a group around the idea of being rational is not enough. You have to actually put in the work to oppose the slide into cultishness.

Guardians of the Truth

There is an enormous psychological difference between believing that you absolutely, certainly, have the truth, versus trying to discover the truth. If you believe that you have the truth, and that it must be protected from heretics, torture and murder follow. Alternatively, if you believe that you are close to the truth, but perhaps not there yet, someone who disagrees with you is simply wrong, not a mortal enemy.

Guardians of the Gene Pool

It is a common misconception that the Nazis wanted their eugenics program to create a new breed of supermen. In fact, they wanted to breed back to the archetypal Nordic man. They located their ideals in the past, which is a counterintuitive idea for many of us.

Guardians of Ayn Rand

Ayn Rand, the leader of the Objectivists, praised reason and rationality. The group she created became a cult. Praising rationality does not provide immunity to the human trend towards cultishness.

Two Cult Koans

Two Koans about individuals concerned that they may have joined a cult.

Asch's Conformity Experiment

The unanimous agreement of surrounding others can make subjects disbelieve (or at least, fail to report) what's right before their eyes. The addition of just one dissenter is enough to dramatically reduce the rates of improper conformity.

On Expressing Your Concerns

A way of breaking the conformity effect in some cases.

Lonely Dissent

Joining a revolution does take courage, but it is something that humans can reliably do. It is comparatively more difficult to risk death. But it is more difficult than either of these to be the first person in a rebellion, to be the only one who is saying something different. That doesn't feel like going to school in black. It feels like going to school in a clown suit.

Cultish Countercultishness

People often nervously ask, "This isn't a cult, is it?" when encountering a group that thinks something weird. There are many reasons why this question doesn't make sense. For one thing, if a group really were a cult, its members would not say so. Instead, when considering whether or not to join a group, look at the details of the group itself. Is their reasoning sound? Do they do awful things to their members?

K: Letting Go

The Importance of Saying "Oops"

When your theory is proved wrong, just scream "OOPS!" and admit your mistake fully. Don't just admit local errors. Don't try to protect your pride by conceding the absolute minimal patch of ground. Making small concessions means that you will make only small improvements. It is far better to make big improvements quickly. This is a lesson of Bayescraft that Traditional Rationality fails to teach.

The Crackpot Offer

If you make a mistake, don't excuse it or pat yourself on the back for thinking originally; acknowledge you made a mistake and move on. If you become invested in your own mistakes, you'll stay stuck on bad ideas.

Just Lose Hope Already

Admit it when the evidence goes against you, or else things can get a whole lot worse.

(alternate summary:)

Casey Serin owes banks 2.2 million dollars after lying on mortgage applications in order to simultaneously buy 8 different houses in different states. The sad part is that he hasn't given up - he hasn't declared bankruptcy, and has just attempted to purchase another house. While this behavior seems merely stupid, it brings to mind Merton and Scholes of Long-Term Capital Management, who made 40% profits for three years, and then lost it all when they overleveraged. Each profession has rules on how to be successful, which makes rationality seem unlikely to help greatly in life. Yet it seems that one of the greater skills is not being stupid, which rationality does help with.

The Proper Use of Doubt

Doubt is often regarded as virtuous for the wrong reason: because it is a sign of humility and recognition of your place in the hierarchy. But from a rationalist perspective, this is not why you should doubt. The doubt, rather, should exist to annihilate itself: to confirm the reason for doubting, or to show the doubt to be baseless. When you can no longer make progress in this respect, the doubt is no longer useful to you as a rationalist.

You Can Face Reality

This post quotes a poem by Eugene Gendlin, which reads, "What is true is already so. / Owning up to it doesn't make it worse. / Not being open about it doesn't make it go away. / And because it's true, it is what is there to be interacted with. / Anything untrue isn't there to be lived. / People can stand what is true, / for they are already enduring it."

The Meditation on Curiosity

If you can find within yourself the slightest shred of true uncertainty, then guard it like a forester nursing a campfire. If you can make it blaze up into a flame of curiosity, it will make you light and eager, and give purpose to your questioning and direction to your skills.

No One Can Exempt You From Rationality's Laws

Traditional Rationality is phrased in terms of social rules, with violations interpretable as cheating - as defections from cooperative norms. But viewing rationality as a social obligation gives rise to some strange ideas. The laws of rationality are mathematics, and no social maneuvering can exempt you.

Leave a Line of Retreat

If you are trying to judge whether some unpleasant idea is true you should visualise what the world would look like if it were true, and what you would do in that situation. This will allow you to be less scared of the idea, and reason about it without immediately trying to reject it.

Crisis of Faith

The Ritual

A depiction of a crisis of faith in the Beisutsukai world.

(alternate summary:)

Jeffreyssai carefully undergoes a crisis of faith.

Book III: The Machine in the Ghost

L: The Simple Math of Evolution

An Alien God

Evolution is awesomely powerful, unbelievably stupid, incredibly slow, monomaniacally singleminded, irrevocably splintered in focus, blindly shortsighted, and itself a completely accidental process. If evolution were a god, it would not be Jehovah, but H. P. Lovecraft's Azathoth, the blind idiot god burbling chaotically at the center of everything.

The Wonder of Evolution

...is not how amazingly well it works, but that it works at all without a mind, brain, or the ability to think abstractly - that an entirely accidental process can produce complex designs. If you talk about how amazingly well evolution works, you're missing the point.

(alternate summary:)

The wonder of the first replicator was not how amazingly well it replicated, but that a first replicator could arise, at all, by pure accident, in the primordial seas of Earth. That first replicator would undoubtedly be devoured in an instant by a sophisticated modern bacterium. Likewise, the wonder of evolution itself is not how well it works, but that a brainless, accidentally occurring optimization process can work at all. If you praise evolution for being such a wonderfully intelligent Creator, you're entirely missing the wonderful thing about it.

Evolutions Are Stupid (But Work Anyway)

Evolution, while not simple, is sufficiently simpler than organic brains that we can describe mathematically how slow and stupid it is.

(alternate summary:)

Modern evolutionary theory gives us a definite picture of evolution's capabilities. If you praise evolution one millimeter higher than this, you are not scoring points against creationists, you are just being factually inaccurate. In particular we can calculate the probability and time for advantageous genes to rise to fixation. For example, a mutation conferring a 3% advantage would have only a 6% probability of surviving, and if it did so, would take 875 generations to rise to fixation in a population of 500,000 (on average).
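
Assuming the standard population-genetics approximations these figures appear to come from (a new beneficial mutation survives drift with probability of about 2s, and takes about 2 ln(N)/s generations to reach fixation), the summary's numbers can be reproduced directly:

```python
import math

s = 0.03     # 3% fitness advantage
N = 500_000  # population size

print(f"Chance a new mutation survives drift: ~{2 * s:.0%}")           # ~6%
print(f"Generations to rise to fixation: ~{2 * math.log(N) / s:.0f}")  # ~875
```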

No Evolutions for Corporations or Nanodevices

Price's Equation describes quantitatively how the change in an average trait, in each generation, is equal to the covariance between that trait and fitness. Such covariance requires substantial variation in traits, substantial variation in fitness, and substantial correlation between the two - and then, to get large cumulative selection pressures, the correlation must have persisted over many generations with high-fidelity inheritance, continuing sources of new variation, and frequent birth of a significant fraction of the population. People think of "evolution" as something that automatically gets invoked where "reproduction" exists, but these other conditions may not be fulfilled - which is why corporations haven't evolved, and nanodevices probably won't.
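
As a toy illustration of the covariance form of the equation (a hypothetical four-member population; the transmission-bias term is ignored):

```python
z = [1.0, 2.0, 3.0, 4.0]  # trait values of four parents (invented data)
w = [0.5, 1.0, 1.5, 2.0]  # their relative fitnesses (offspring counts)

def mean(xs):
    return sum(xs) / len(xs)

# Change in the mean trait per generation = Cov(w, z) / mean(w).
cov_wz = mean([wi * zi for wi, zi in zip(w, z)]) - mean(w) * mean(z)
print(cov_wz / mean(w))  # 0.5: selection shifts the mean trait upward

# Cross-check: offspring-weighted mean trait minus the parents' mean trait.
print(sum(wi * zi for wi, zi in zip(w, z)) / sum(w) - mean(z))  # also 0.5
```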

Evolving to Extinction

Contrary to a naive view that evolution works for the good of a species, evolution says that genes which outreproduce their alternative alleles increase in frequency within a gene pool. It is entirely possible for genes which "harm" the species to outcompete their alternatives in this way - indeed, it is entirely possible for a species to evolve to extinction.

(alternate summary:)

On how evolution could be responsible for the bystander effect.

(alternate summary:)

It is a common misconception that evolution works for the good of a species, but actually evolution only cares about the inclusive fitness of genes relative to each other, and so it is quite possible for a species to evolve to extinction.

The Tragedy of Group Selectionism

A tale of how some pre-1960s biologists were led astray by expecting evolution to do smart, nice things like they would do themselves.

(alternate summary:)

Describes a key case where some pre-1960s evolutionary biologists went wrong by anthropomorphizing evolution - in particular, Wynne-Edwards, Allee, and Brereton, among others, believed that predators would voluntarily restrain their breeding to avoid overpopulating their habitat. Since evolution does not usually do this sort of thing, their rationale was group selection - populations that did this would survive better. But group selection is extremely difficult to make work mathematically, and an experiment under conditions extreme enough to permit group selection had rather different results.

Fake Optimization Criteria

Why study evolution? For one thing - it lets us see an alien optimization process up close - lets us see the real consequence of optimizing strictly for an alien optimization criterion like inclusive genetic fitness. Humans, who try to persuade other humans to do things their way, think that this policy criterion ought to require predators to restrain their breeding to live in harmony with prey; the true result is something that humans find less aesthetic.

Adaptation-Executers, not Fitness-Maximizers

A central principle of evolutionary biology in general, and evolutionary psychology in particular. If we regarded human taste buds as trying to maximize fitness, we might expect that, say, humans fed a diet too high in calories and too low in micronutrients, would begin to find lettuce delicious, and cheeseburgers distasteful. But it is better to regard taste buds as an executing adaptation - they are adapted to an ancestral environment in which calories, not micronutrients, were the limiting factor.

Evolutionary Psychology

The human brain, and every ability for thought and emotion in it, are all adaptations selected for by evolution. Humans have the ability to feel angry for the same reason that birds have wings: ancient humans and birds with those adaptations had more kids. But, it is easy to forget that there is a distinction between the reason humans have the ability to feel anger, and the reason why a particular person was angry at a particular thing. Human brains are adaptation executors, not fitness maximizers.

An Especially Elegant Evpsych Experiment

An experiment comparing expected parental grief at the death of a child at different ages with the reproductive success rate of children at that age in a hunter-gatherer tribe.

Superstimuli and the Collapse of Western Civilization

As a side effect of evolution, superstimuli exist, and, as a result of economics, they are getting and will likely continue to get worse.

(alternate summary:)

At least 3 people have died from playing online games non-stop. How is it that a game is so enticing that, after 57 straight hours of playing, a person would rather spend the next hour playing than sleeping or eating? A candy bar is a superstimulus: it corresponds overwhelmingly well to the EEA's healthy-food characteristics of sugar and fat. If people enjoy these things, the market will respond by providing as much of them as possible, even if other considerations make it undesirable.

Thou Art Godshatter

Describes the evolutionary psychology behind the complexity of human values - how they got to be complex, and why, given that origin, there is no reason in hindsight to expect them to be simple. We certainly are not built to maximize genetic fitness.

(alternate summary:)

Being a thousand shards of desire isn't always fun, but at least it's not boring. Somewhere along the line, we evolved tastes for novelty, complexity, elegance, and challenge - tastes that judge the blind idiot god's monomaniacal focus, and find it aesthetically unsatisfying.

M: Fragile Purposes

Belief in Intelligence

What does a belief that an agent is intelligent look like? What predictions does it make?

Humans in Funny Suits

It's really hard to imagine aliens that are fundamentally different from human beings.

Optimization and the Singularity

An introduction to optimization processes and why Yudkowsky thinks that a singularity would be far more powerful than calculations based on human progress would suggest.

Ghosts in the Machine

There is a way of thinking about programming a computer that conforms well to human intuitions: telling the computer what to do. The problem is that the computer isn't going to understand you, unless you program the computer to understand. If you are programming an AI, you are not giving instructions to a ghost in the machine; you are creating the ghost.

Artificial Addition

If you imagine a world where people are stuck on the "artificial addition" (i.e., machine calculator) problem, the way people currently are stuck on artificial intelligence, and watch them trying the same popular approaches taken today toward AI, it becomes clear how silly those approaches are. Contrary to popular wisdom (in that world or ours), the solution is not to "evolve" an artificial adder, or invoke the need for special physics, or build a huge database of solutions, etc. - because all of these methods dodge the crucial task of understanding what addition involves, and instead try to dance around it. Moreover, the history of AI research shows the problems of believing assertions one cannot re-generate from one's own knowledge.

Terminal Values and Instrumental Values

Proposes a formalism for a discussion of the relationship between terminal and instrumental values. Terminal values are world states that we assign some sort of positive or negative worth to. Instrumental values are links in a chain of events that lead to desired world states.
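
The formalism is easy to sketch in code. Below is a minimal, illustrative toy - the scenario, probabilities, and names are invented here, not taken from the post: terminal values are utilities assigned directly to world states, and an action's instrumental value is the expected terminal value of the outcomes it leads to.

```python
# Hypothetical toy: terminal values are utilities over world states.
terminal_value = {"at_airport_on_time": 1.0, "missed_flight": -1.0}

# Hypothetical outcome model: P(world state | action).
outcome_model = {
    "drive":    {"at_airport_on_time": 0.90, "missed_flight": 0.10},
    "take_bus": {"at_airport_on_time": 0.70, "missed_flight": 0.30},
}

def instrumental_value(action):
    """An action's value is inherited from the terminal states it leads to."""
    return sum(p * terminal_value[state]
               for state, p in outcome_model[action].items())

best = max(outcome_model, key=instrumental_value)
print(best, instrumental_value(best))  # drive 0.8
```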

Leaky Generalizations

The words and statements that we use are inherently "leaky": they do not precisely convey absolute and perfect information. Most humans have ten fingers, but if you know that someone is a human, you cannot confirm (with probability 1) that they have ten fingers. The same holds for planning and ethical advice.

The Hidden Complexity of Wishes

There are a lot of things that humans care about. Therefore, the wishes that we make (as if to a genie) are enormously more complicated than we would intuitively suspect. In order to safely ask a powerful, intelligent being to do something for you, that being must share your entire decision criterion, or else the outcome will likely be horrible.

Anthropomorphic Optimism

Don't bother coming up with clever, persuasive arguments for why evolution will do things the way you prefer. It really isn't listening.

Lost Purposes

On noticing when you're still doing something that has become disconnected from its original purpose.

(alternate summary:)

It is possible for the various steps in a complex plan to become valued in and of themselves, rather than as steps to achieve some desired goal. It is especially easy if the plan is being executed by a complex organization, where each group or individual in the organization is only evaluated by whether or not they carry out their assigned step. When this process is carried to its extreme, we get Soviet shoe factories manufacturing tiny shoes to increase their production quotas, and the No Child Left Behind Act.

N: A Human's Guide to Words

The Parable of the Dagger

A word fails to connect to reality in the first place. Is Socrates a framster? Yes or no?

The Parable of Hemlock

Socrates is a human, and humans, by definition, are mortal. So if you defined humans to not be mortal, would Socrates live forever?

(alternate summary:)

Your argument, if it worked, could coerce reality to go a different way by choosing a different word definition. Socrates is a human, and humans, by definition, are mortal. So if you defined humans to not be mortal, would Socrates live forever?

You try to establish any sort of empirical proposition as being true "by definition". Socrates is a human, and humans, by definition, are mortal. So is it a logical truth that Socrates will keel over if he drinks hemlock? It seems like there are logically possible, non-self-contradictory worlds where Socrates doesn't keel over - where he's immune to hemlock by a quirk of biochemistry, say. Logical truths are true in all possible worlds, and so never tell you which possible world you live in - and anything you can establish "by definition" is a logical truth.

You unconsciously slap the conventional label on something, without actually using the verbal definition you just gave. You know perfectly well that Bob is "human", even though, on your definition, you can never call Bob "human" without first observing him to be mortal.

Words as Hidden Inferences

The mere presence of words can influence thinking, sometimes misleading it.

(alternate summary:)

The mere presence of words can influence thinking, sometimes misleading it.

The act of labeling something with a word disguises a challengeable inductive inference you are making. If the last 11 egg-shaped objects drawn have been blue, and the last 8 cubes drawn have been red, it is a matter of induction to say this rule will hold in the future. But if you call the blue eggs "bleggs" and the red cubes "rubes", you may reach into the barrel, feel an egg shape, and think "Oh, a blegg."

Extensions and Intensions

You try to define a word using words, in turn defined with ever-more-abstract words, without being able to point to an example.

(alternate summary:)

You try to define a word using words, in turn defined with ever-more-abstract words, without being able to point to an example. "What is red?" "Red is a color." "What's a color?" "It's a property of a thing?" "What's a thing? What's a property?" It never occurs to you to point to a stop sign and an apple.

The extension doesn't match the intension. We aren't consciously aware of our identification of a red light in the sky as "Mars", which will probably happen regardless of our attempt to define "Mars" as "the God of War".

Similarity Clusters

Your verbal definition doesn't capture more than a tiny fraction of the category's shared characteristics, but you try to reason as if it does.

(alternate summary:)

Your verbal definition doesn't capture more than a tiny fraction of the category's shared characteristics, but you try to reason as if it does. When the philosophers of Plato's Academy claimed that the best definition of a human was a "featherless biped", Diogenes the Cynic is said to have exhibited a plucked chicken and declared "Here is Plato's Man." The Platonists promptly changed their definition to "a featherless biped with broad nails".

Typicality and Asymmetrical Similarity

You try to treat category membership as all-or-nothing, ignoring the existence of more and less typical subclusters.

(alternate summary:)

You try to treat category membership as all-or-nothing, ignoring the existence of more and less typical subclusters. Ducks and penguins are less typical birds than robins and pigeons. Interestingly, a between-groups experiment showed that subjects thought a disease was more likely to spread from robins to ducks on an island, than from ducks to robins.

The Cluster Structure of Thingspace

A verbal definition works well enough in practice to point out the intended cluster of similar things, but you nitpick exceptions.

(alternate summary:)

A verbal definition works well enough in practice to point out the intended cluster of similar things, but you nitpick exceptions. Not every human has ten fingers, or wears clothes, or uses language; but if you look for an empirical cluster of things which share these characteristics, you'll get enough information that the occasional nine-fingered human won't fool you.

Disguised Queries

You ask whether something "is" or "is not" a category member but can't name the question you really want answered.

(alternate summary:)

You ask whether something "is" or "is not" a category member but can't name the question you really want answered. What is a "man"? Is Barney the Baby Boy a "man"? The "correct" answer may depend considerably on whether the query you really want answered is "Would hemlock be a good thing to feed Barney?" or "Will Barney make a good husband?"

Neural Categories

You treat intuitively perceived hierarchical categories as the only correct way to parse the world, without realizing that other forms of statistical inference are possible even though your brain doesn't use them.

(alternate summary:)

You treat intuitively perceived hierarchical categories as the only correct way to parse the world, without realizing that other forms of statistical inference are possible even though your brain doesn't use them. It's much easier for a human to notice whether an object is a "blegg" or a "rube" than to notice that red objects never glow in the dark, but red furred objects have all the other characteristics of bleggs. Other statistical algorithms work differently.

How An Algorithm Feels From Inside

You talk about categories as if they are manna fallen from the Platonic Realm, rather than inferences implemented in a real brain.

(alternate summary:)

You talk about categories as if they are manna fallen from the Platonic Realm, rather than inferences implemented in a real brain. The ancient philosophers said "Socrates is a man", not, "My brain perceptually classifies Socrates as a match against the 'human' concept".

You argue about a category membership even after screening off all questions that could possibly depend on a category-based inference. After you observe that an object is blue, egg-shaped, furred, flexible, opaque, luminescent, and palladium-containing, what's left to ask by arguing, "Is it a blegg?" But if your brain's categorizing neural network contains a (metaphorical) central unit corresponding to the inference of blegg-ness, it may still feel like there's a leftover question.

Disputing Definitions

A worked example of an argument sliding into a dispute over definitions: does a tree falling in a deserted forest make a "sound"? Shows how the dispute dissolves once each side replaces the word with what they actually mean by it.

(alternate summary:)

You allow an argument to slide into being about definitions, even though it isn't what you originally wanted to argue about. If, before a dispute started about whether a tree falling in a deserted forest makes a "sound", you asked the two soon-to-be arguers whether they thought a "sound" should be defined as "acoustic vibrations" or "auditory experiences", they'd probably tell you to flip a coin. Only after the argument starts does the definition of a word become politically charged.

Feel the Meaning

You think a word has a meaning, as a property of the word itself, rather than as a label that your brain associates with a particular concept.

(alternate summary:)

You think a word has a meaning, as a property of the word itself, rather than as a label that your brain associates with a particular concept. When someone shouts, "Yikes! A tiger!", evolution would not favor an organism that thinks, "Hm... I have just heard the syllables 'Tie' and 'Grr' which my fellow tribe members associate with their internal analogues of my own tiger concept and which aiiieeee CRUNCH CRUNCH GULP." So the brain takes a shortcut, and it seems that the meaning of tigerness is a property of the label itself. People argue about the correct meaning of a label like "sound".

The Argument from Common Usage

You argue over the meanings of a word, even after all sides understand perfectly well what the other sides are trying to say.

(alternate summary:)

You argue over the meanings of a word, even after all sides understand perfectly well what the other sides are trying to say. The human ability to associate labels to concepts is a tool for communication. When people want to communicate, we're hard to stop; if we have no common language, we'll draw pictures in sand. When you each understand what is in the other's mind, you are done.

You pull out a dictionary in the middle of an empirical or moral argument. Dictionary editors are historians of usage, not legislators of language. If the common definition contains a problem - if "Mars" is defined as the God of War, or a "dolphin" is defined as a kind of fish, or "Negroes" are defined as a separate category from humans, the dictionary will reflect the standard mistake.

You pull out a dictionary in the middle of any argument ever. Seriously, what the heck makes you think that dictionary editors are an authority on whether "atheism" is a "religion" or whatever? If you have any substantive issue whatsoever at stake, do you really think dictionary editors have access to ultimate wisdom that settles the argument?

You defy common usage without a reason, making it gratuitously hard for others to understand you. Fast stand up plutonium, with bagels without handle.

Empty Labels

You use complex renamings to create the illusion of inference.

(alternate summary:)

You use complex renamings to create the illusion of inference. Is a "human" defined as a "mortal featherless biped"? Then write: "All [mortal featherless bipeds] are mortal; Socrates is a [mortal featherless biped]; therefore, Socrates is mortal." Looks less impressive that way, doesn't it?

Taboo Your Words

When a word poses a problem, the simplest solution is to eliminate the word and its synonyms.

(alternate summary:)

If Albert and Barry aren't allowed to use the word "sound", then Albert will have to say "A tree falling in a deserted forest generates acoustic vibrations", and Barry will say "A tree falling in a deserted forest generates no auditory experiences". When a word poses a problem, the simplest solution is to eliminate the word and its synonyms.

Replace the Symbol with the Substance

Describes the technique: when a word is getting in the way, replace the symbol with the substance - with the details of the thing itself - and see what is actually there.

(alternate summary:)

The existence of a neat little word prevents you from seeing the details of the thing you're trying to think about.

(alternate summary:)

The existence of a neat little word prevents you from seeing the details of the thing you're trying to think about. What actually goes on in schools once you stop calling it "education"? What's a degree, once you stop calling it a "degree"? If a coin lands "heads", what's its radial orientation? What is "truth", if you can't say "accurate" or "correct" or "represent" or "reflect" or "semantic" or "believe" or "knowledge" or "map" or "real" or any other simple term?

Fallacies of Compression

You have only one word, but there are two or more different things-in-reality, so that all the facts about them get dumped into a single undifferentiated mental bucket.

(alternate summary:)

You have only one word, but there are two or more different things-in-reality, so that all the facts about them get dumped into a single undifferentiated mental bucket. It's part of a detective's ordinary work to observe that Carol wore red last night, or that she has black hair; and it's part of a detective's ordinary work to wonder if maybe Carol dyes her hair. But it takes a subtler detective to wonder if there are two Carols, so that the Carol who wore red is not the same as the Carol who had black hair.

Categorizing Has Consequences

You see patterns where none exist, harvesting other characteristics from your definitions even when there is no similarity along that dimension.

(alternate summary:)

You see patterns where none exist, harvesting other characteristics from your definitions even when there is no similarity along that dimension. In Japan, it is thought that people of blood type A are earnest and creative, blood type Bs are wild and cheerful, blood type Os are agreeable and sociable, and blood type ABs are cool and controlled.

Sneaking in Connotations

You try to sneak in the connotations of a word, by arguing from a definition that doesn't include the connotations.

(alternate summary:)

You try to sneak in the connotations of a word, by arguing from a definition that doesn't include the connotations. A "wiggin" is defined in the dictionary as a person with green eyes and black hair. The word "wiggin" also carries the connotation of someone who commits crimes and launches cute baby squirrels, but that part isn't in the dictionary. So you point to someone and say: "Green eyes? Black hair? See, told you he's a wiggin! Watch, next he's going to steal the silverware."

Arguing "By Definition"

You claim "X, by definition, is a Y!" On such occasions you're almost certainly trying to sneak in a connotation of Y that wasn't in your given definition.

(alternate summary:)

You claim "X, by definition, is a Y!" On such occasions you're almost certainly trying to sneak in a connotation of Y that wasn't in your given definition. You define "human" as a "featherless biped", and point to Socrates and say, "No feathers - two legs - he must be human!" But what you really care about is something else, like mortality. If what was in dispute was Socrates's number of legs, the other fellow would just reply, "Whaddaya mean, Socrates's got two legs? That's what we're arguing about in the first place!"

You claim "Ps, by definition, are Qs!" If you see Socrates out in the field with some biologists, gathering herbs that might confer resistance to hemlock, there's no point in arguing "Men, by definition, are mortal!" The main time you feel the need to tighten the vise by insisting that something is true "by definition" is when there's other information that calls the default inference into doubt.

You try to establish membership in an empirical cluster "by definition". You wouldn't feel the need to say, "Hinduism, by definition, is a religion!" because, well, of course Hinduism is a religion. It's not just a religion "by definition", it's, like, an actual religion. Atheism does not resemble the central members of the "religion" cluster, so if it wasn't for the fact that atheism is a religion by definition, you might go around thinking that atheism wasn't a religion. That's why you've got to crush all opposition by pointing out that "Atheism is a religion" is true by definition, because it isn't true any other way.

Where to Draw the Boundary?

Your definition draws a boundary around things that don't really belong together.

(alternate summary:)

Your definition draws a boundary around things that don't really belong together. You can claim, if you like, that you are defining the word "fish" to refer to salmon, guppies, sharks, dolphins, and trout, but not jellyfish or algae. You can claim, if you like, that this is merely a list, and there is no way a list can be "wrong". Or you can stop playing nitwit games and admit that you made a mistake and that dolphins don't belong on the fish list.

Entropy, and Short Codes

Which sounds more plausible, "God did a miracle" or "A supernatural universe-creating entity temporarily suspended the laws of physics"?

(alternate summary:)

You use a short word for something that you won't need to describe often, or a long word for something you'll need to describe often. This can result in inefficient thinking, or even misapplications of Occam's Razor, if your mind thinks that short sentences sound "simpler". Which sounds more plausible, "God did a miracle" or "A supernatural universe-creating entity temporarily suspended the laws of physics"?
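
A small sketch of the information-theoretic point, with made-up probabilities (nothing here is from the post): in an efficient code, a concept's description length is about -log2 of its probability of use, which is why frequently needed things earn short words - and why a short English phrase is no evidence that the hypothesis it labels is simple.

```python
import math

# Made-up frequencies of use for two concepts.
concepts = {"dog": 0.05, "wiggin": 1e-7}

for word, p in concepts.items():
    # In an optimal code, description length ~ -log2(probability).
    print(f"{word!r}: optimal code length ~ {-math.log2(p):.1f} bits")

# "God did a miracle" is short in English, but the hypothesis it labels is
# not thereby simple: what matters for Occam's Razor is length under an
# efficient encoding, not the syllable count.
```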

Mutual Information, and Density in Thingspace

You draw your boundary around a volume of space where there is no greater-than-usual density, meaning that the associated word does not correspond to any performable Bayesian inferences.

(alternate summary:)

You draw your boundary around a volume of space where there is no greater-than-usual density, meaning that the associated word does not correspond to any performable Bayesian inferences. Since green-eyed people are not more likely to have black hair, or vice versa, and they don't share any other characteristics in common, why have a word for "wiggin"?
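
The "performable Bayesian inferences" can be quantified as mutual information. A minimal sketch with invented numbers: if eye color and hair color are statistically independent, the mutual information between them is zero bits, so a word drawn around "green eyes plus black hair" licenses no inference.

```python
import math

def mutual_information(joint):
    """I(X;Y) = sum over x,y of p(x,y) * log2(p(x,y) / (p(x) * p(y)))."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Invented joint distribution with P(green eyes) = 0.1, P(black hair) = 0.2,
# and the two traits independent of each other.
joint = {("green", "black"): 0.02, ("green", "other"): 0.08,
         ("other", "black"): 0.18, ("other", "other"): 0.72}

print(mutual_information(joint))  # ~0.0 bits: "wiggin" buys you nothing
```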

Superexponential Conceptspace, and Simple Words

You draw an unsimple boundary without any reason to do so.

(alternate summary:)

You draw an unsimple boundary without any reason to do so. The act of defining a word to refer to all humans, except black people, seems kind of suspicious. If you don't present reasons to draw that particular boundary, trying to create an "arbitrary" word in that location is like a detective saying: "Well, I haven't the slightest shred of support one way or the other for who could've murdered those orphans... but have we considered John Q. Wiffleheim as a suspect?"

Conditional Independence, and Naive Bayes

You use categorization to make inferences about properties that lack the empirical structure needed for Naive Bayes to be a good approximation - namely, conditional independence given knowledge of the class.

(alternate summary:)

You use categorization to make inferences about properties that lack the empirical structure needed for Naive Bayes to be a good approximation - namely, conditional independence given knowledge of the class. No way am I trying to summarize this one. Just read the blog post.
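
For readers who want the bare structure anyway, here is a toy Naive Bayes classifier for the blegg/rube barrel - the numbers are invented for illustration, not taken from the post. The assumption doing all the work is that, given the class, each feature is independent of the others.

```python
priors = {"blegg": 0.5, "rube": 0.5}

# Invented P(feature present | class) tables.
likelihoods = {
    "blegg": {"blue": 0.97, "egg_shaped": 0.98, "furred": 0.95},
    "rube":  {"blue": 0.02, "egg_shaped": 0.03, "furred": 0.10},
}

def posterior(observed):
    """P(class | features), multiplying likelihoods as if independent given the class."""
    scores = {}
    for cls, prior in priors.items():
        score = prior
        for feature, present in observed.items():
            p = likelihoods[cls][feature]
            score *= p if present else (1 - p)
        scores[cls] = score
    total = sum(scores.values())
    return {cls: s / total for cls, s in scores.items()}

print(posterior({"blue": True, "egg_shaped": True, "furred": False}))
```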

Words as Mental Paintbrush Handles

Visualize a "triangular lightbulb". What did you see?

(alternate summary:)

You think that words are like tiny little LISP symbols in your mind, rather than words being labels that act as handles to direct complex mental paintbrushes that can paint detailed pictures in your sensory workspace. Visualize a "triangular lightbulb". What did you see?

Variable Question Fallacies

"Martin told Bob the building was on his left." But "left" is a function-word that evaluates with a speaker-dependent variable grabbed from the surrounding context. Whose "left" is meant, Bob's or Martin's?

(alternate summary:)

You use a word that has different meanings in different places as though it meant the same thing on each occasion, possibly creating the illusion of something protean and shifting. "Martin told Bob the building was on his left." But "left" is a function-word that evaluates with a speaker-dependent variable grabbed from the surrounding context. Whose "left" is meant, Bob's or Martin's?

37 Ways That Words Can Be Wrong

Contains summaries of the sequence of posts about the proper use of words.

Book IV: Mere Reality

O: Lawful Truth

Universal Fire

You can't change just one thing in the world and expect the rest to continue working as before.

Universal Law

In our everyday lives, we are accustomed to rules with exceptions, but the basic laws of the universe apply everywhere without exception. Apparent violations exist only in our models, not in reality.

Is Reality Ugly?

There are three reasons why a world governed by math can still seem messy. First, we may not actually know the math. Second, even if we do know all of the math, we may not have enough computing power to do the full calculation. And finally, even if we knew all the math and could compute it, we still wouldn't know where in the mathematical system we are living.

Beautiful Probability

Bayesians expect probability theory, and rationality itself, to be math: self-consistent, neat, even beautiful. This is why Bayesians think that Cox's theorems are so important.

Outside the Laboratory

Those who understand the map/territory distinction will integrate their knowledge, as they see the evidence that reality is a single unified process.

(alternate summary:)

Written regarding the proverb "Outside the laboratory, scientists are no wiser than anyone else." The case is made that if this proverb is in fact true, that's quite worrisome, because it implies that scientists are blindly following scientific rituals without understanding why. In particular, it is argued that a scientist who is religious probably doesn't understand the foundations of science very well.

The Second Law of Thermodynamics, and Engines of Cognition

To form accurate beliefs about something, you really do have to observe it. It's a very physical, very real process: any rational mind does "work" in the thermodynamic sense, not just the sense of mental effort. Engines of cognition are not so different from heat engines, though they manipulate entropy in a more subtle form than burning gasoline. So unless you can tell me which specific step in your argument violates the laws of physics by giving you true knowledge of the unseen, don't expect me to believe that a big, elaborate clever argument can do it either.
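
The quantitative bridge, not spelled out above, is Landauer's principle - a standard result in the thermodynamics of computation: erasing one bit of information in an environment at temperature T dissipates at least

```latex
E_{\min} = k_B T \ln 2
```

joules, where k_B is Boltzmann's constant. Conversely, a Szilard engine can convert one bit of reliable knowledge about its environment into at most that much extracted work, which is the sense in which gathering evidence and doing thermodynamic work are two sides of one exchange.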

Perpetual Motion Beliefs

People learn under the traditional school regimen that the teacher tells you certain things, and you must believe them and recite them back; but if a mere student suggests a belief, you do not have to obey it. They map the domain of belief onto the domain of authority, and think that a certain belief is like an order that must be obeyed, but a probabilistic belief is like a mere suggestion. And when half-trained or tenth-trained rationalists abandon their art and try to believe without evidence just this once, they often build vast edifices of justification, confusing themselves just enough to conceal the magical steps. It can be quite a pain to nail down where the magic occurs - their structure of argument tends to morph and squirm away as you interrogate them. But there's always some step where a tiny probability turns into a large one - where they try to believe without evidence - where they step into the unknown, thinking, "No one can prove me wrong".

Searching for Bayes-Structure

If a mind is arriving at true beliefs, and we assume that the second law of thermodynamics has not been violated, that mind must be doing something at least vaguely Bayesian - at least one process with a sort-of Bayesian structure somewhere - or it couldn't possibly work.
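
The minimal "Bayes-structure" in question is just Bayes' theorem. A small self-contained sketch, with hypothetical numbers:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | E) = P(E | H) P(H) / P(E)."""
    numerator = prior * p_e_given_h
    marginal = numerator + (1 - prior) * p_e_given_not_h
    return numerator / marginal

# Hypothetical sensor: fires 90% of the time when H is true, 5% when false.
print(bayes_update(prior=0.1, p_e_given_h=0.9, p_e_given_not_h=0.05))  # ~0.667
```

Any physical process that reliably improves a map must implement something like this update, however indirectly.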

P: Reductionism 101

Dissolving the Question

This is where the "free will" puzzle is explicitly posed, along with criteria for what does and does not constitute a satisfying answer.

Wrong Questions

Where the mind cuts against reality's grain, it generates wrong questions - questions that cannot possibly be answered on their own terms, but only dissolved by understanding the cognitive algorithm that generates the perception of a question.

Righting a Wrong Question

When you are faced with an unanswerable question - a question to which it seems impossible to even imagine an answer - there is a simple trick which can turn the question solvable. Instead of asking, "Why do I have free will?", try asking, "Why do I think I have free will?"

Mind Projection Fallacy

E. T. Jaynes used the term Mind Projection Fallacy to denote the error of projecting your own mind's properties onto the external world. The Mind Projection Fallacy generalizes widely: it appears in the argument over the real meaning of the word "sound", in the magazine cover of the monster carrying off a woman in the torn dress, in Kant's declaration that space by its very nature is flat, and in Hume's definition of a priori ideas as those "discoverable by the mere operation of thought, without dependence on what is anywhere existent in the universe"...

Probability is in the Mind

Probabilities express uncertainty, and it is only agents who can be uncertain. A blank map does not correspond to a blank territory. Ignorance is in the mind.

The Quotation is not the Referent

It's very easy to derive extremely wrong conclusions if you don't make a clear enough distinction between your beliefs about the world, and the world itself.

Qualitatively Confused

Using qualitative, binary reasoning may make it easier to confuse belief and reality; if we use probability distributions, the distinction is much clearer.

Think Like Reality

"Quantum physics is not "weird". You are weird. You have the absolutely bizarre idea that reality ought to consist of little billiard balls bopping around, when in fact reality is a perfectly normal cloud of complex amplitude in configuration space. This is your problem, not reality's, and you are the one who needs to change."

Chaotic Inversion

When something in your life seems chaotic and unpredictable - Yudkowsky's example is his own sleep cycle and productivity - consider that the unpredictability may be a fact about your model rather than about the phenomenon itself: randomness is in the map, not the territory, and you can often start gathering data instead of shrugging.

Reductionism

We build models of the universe that have many different levels of description. But so far as anyone has been able to determine, the universe itself has only the single level of fundamental physics - reality doesn't explicitly compute protons, only quarks.

Explaining vs. Explaining Away

Apparently "the mere touch of cold philosophy", i.e., the truth, has destroyed haunts in the air, gnomes in the mine, and rainbows. This calls to mind a rather different bit of verse:

One of these things
Is not like the others
One of these things
Doesn't belong

The air has been emptied of its haunts, and the mine de-gnomed—but the rainbow is still there!

Fake Reductionism

There is a very great distinction between being able to see where the rainbow comes from, and playing around with prisms to confirm it, and maybe making a rainbow yourself by spraying water droplets, versus some dour-faced philosopher just telling you, "No, there's nothing special about the rainbow. Didn't you hear? Scientists have explained it away. Just something to do with raindrops or whatever. Nothing to be excited about." I think this distinction probably accounts for a hell of a lot of the deadly existential emptiness that supposedly accompanies scientific reductionism.

Savanna Poets

Equations of physics aren't about strong emotions. They can inspire those emotions in the mind of a scientist, but the emotions are not as raw as the stories told about Jupiter (the god). And so it might seem that reducing Jupiter to a spinning ball of methane and ammonia takes away some of the poetry in those stories. But ultimately, we don't have to keep telling stories about Jupiter. It's not necessary for Jupiter to think and feel in order for us to tell stories, because we can always write stories with humans as their protagonists.

Q: Joy in the Merely Real

Joy in the Merely Real

If you can't take joy in things that turn out to be explicable, you're going to set yourself up for eternal disappointment. Don't worry if quantum physics turns out to be normal.

Joy in Discovery

It feels incredibly good to discover the answer to a problem that nobody else has answered. And we should enjoy finding answers. But we really shouldn't base our joy on the fact that nobody else has done it before. Even if someone else knows the answer to a puzzle, if you don't know it, it's still a mystery to you. And you should still feel joy when you discover the answer.

Bind Yourself to Reality

There are several reasons why it's worth talking about joy in the merely real in a discussion on reductionism. One is to leave a line of retreat. Another is to improve your own abilities as a rationalist by learning to invest your energy in the real world, and in accomplishing things here, rather than in a fantasy.

If You Demand Magic, Magic Won't Help

Magic (and dragons, and UFOs, and ...) get much of their charm from the fact that they don't actually exist. If dragons did exist, people would treat them like zebras; most people wouldn't bother to pay attention, but some scientists would get oddly excited about them. If we ever create dragons, or find aliens, we will have to learn to enjoy them, even though they happen to exist.

Mundane Magic

A list of abilities that would be amazing if they were magic, or if only a few people had them.

The Beauty of Settled Science

Most of the stuff reported in Science News is false, or at the very least, misleading. Scientific controversies are topics of such incredible difficulty that even people in the field aren't sure what's true. Read elementary textbooks. Study the settled science before you try to understand the outer fringes.

Amazing Breakthrough Day: April 1st

A proposal for a new holiday, in which journalists report on great scientific discoveries of the past as if they had just happened, and were still shocking.

Is Humanism A Religion-Substitute?

Trying to replace religion with humanism, atheism, or transhumanism doesn't work. If you try to write a hymn to the nonexistence of god, it will fail, because you are simply trying to imitate something that we don't really need to imitate. But that doesn't mean that the feeling of transcendence is something we should always avoid. After all, in a world in which religion never existed, people would still feel that same way.

Scarcity

Describes a few pieces of experimental evidence showing that objects or information which are believed to be in short supply are valued more than the same objects or information would be on their own.

The Sacred Mundane

There are a lot of bad habits of thought that have developed to defend religious and spiritual experience. They aren't worth saving, even if we discard the original lie. Let's just admit that we were wrong, and enjoy the universe that's actually here.

To Spread Science, Keep It Secret

People don't study science, in part, because they perceive it to be public knowledge. In fact, it's not; you have to study a lot before you actually understand it. But because science is thought to be freely available, people ignore it in favor of cults that conceal their secrets, even if those secrets are wrong. In fact, it might be better if scientific knowledge was hidden from anyone who didn't undergo the initiation ritual, and study as an acolyte, and wear robes, and chant, and...

Initiation Ceremony

Brennan is inducted into the Conspiracy.

R: Physicalism 201

Hand vs. Fingers

When you pick up a cup of water, is it your hand that picks it up, or is it your fingers, thumb, and palm working together? Just because something can be reduced to smaller parts doesn't mean that the original thing doesn't exist.

Angry Atoms

It is very hard, without the benefit of hindsight, to understand just how it is that these little bouncing billiard balls called atoms could ever combine in such a way as to make something angry. If you try to imagine this problem without understanding the ideas of neurons, information processing, computing, etc., you realize just how challenging reductionism actually is.

Heat vs. Motion

For a very long time, people had a detailed understanding of kinetics, and they had a detailed understanding of heat. They understood concepts such as momentum and elastic rebounds, as well as concepts such as temperature and pressure. It took an extraordinary amount of work in order to understand things deeply enough to make us realize that heat and motion were really the same thing.

Brain Breakthrough! It's Made of Neurons!

Eliezer's contribution to Amazing Breakthrough Day.

When Anthropomorphism Became Stupid

Anthropomorphism didn't become obviously wrong until we realized that the tangled neurons inside the brain were performing complex information processing, and that this complexity arose as a result of evolution.

A Priori

The facts that philosophers call "a priori" arrived in your brain by a physical process. Thoughts are existent in the universe; they are identical to the operation of brains. The "a priori" belief generator in your brain works for a reason.

Reductive Reference

Virtually none of your beliefs are about elementary particle fields, which are (as far as we know) the actual reality. This doesn't mean that those beliefs aren't true. "Snow is white" does not mention quarks anywhere, and yet snow nevertheless is white. It's a computational shortcut, but it's still true.

Zombies! Zombies?

Don't try to put your consciousness or your personal identity outside physics. Whatever makes you say "I think therefore I am", causes your lips to move; it is within the chains of cause and effect that produce our observed universe.

Zombie Responses

A few more points on Zombies.

The Generalized Anti-Zombie Principle

The argument against zombies can be extended into a more general anti-zombie principle. But figuring out what that more general principle is turns out to be more difficult than it may seem.

GAZP vs. GLUT

Fleshes out the generalized anti-zombie principle a bit more, and describes the game "follow-the-improbability".

Belief in the Implied Invisible

The fact that something is impossible even in principle to observe is sometimes not enough to conclude that it doesn't exist.

(alternate summary:)

If a spaceship goes over the cosmological horizon relative to us, so that it can no longer communicate with us, should we believe that the spaceship instantly ceases to exist?

Zombies: The Movie

A satirical script for a zombie movie, but not about the lurching and drooling kind. The philosophical kind.

Excluding the Supernatural

Don't rule out supernatural explanations because they're supernatural. Test them the way you would test any other hypothesis. And probably, you will find out that they aren't true.

Psychic Powers

Some of the previous post was incorrect. Psychic powers, if indeed they were ever discovered, would actually be strong evidence in favor of non-reductionism.

S: Quantum Physics and Many Worlds

Quantum Explanations

Quantum mechanics doesn't deserve its fearsome reputation.

(alternate summary:)

Quantum mechanics doesn't deserve its fearsome reputation. If you tell people something is supposed to be mysterious, they won't understand it. It's human intuitions that are "strange" or "weird"; physics itself is perfectly normal. Talking about historical erroneous concepts like "particles" or "waves" is just asking to confuse people; present the real, unified quantum physics straight out. The series will take a strictly realist perspective - quantum equations describe something that is real and out there. Warning: Although a large faction of physicists agrees with this, it is not universally accepted. Stronger warning: I am not even going to present non-realist viewpoints until later, because I think this is a major source of confusion.

Configurations and Amplitude

A preliminary glimpse at the stuff reality is made of. The classic split-photon experiment with half-silvered mirrors. Alternative pathways the photon can take, can cancel each other out. The mysterious measuring tool that tells us the relative squared moduli.
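
The amplitude arithmetic can be sketched in a few lines. This toy uses a common textbook convention for half-silvered mirrors - transmission multiplies the amplitude by 1/sqrt(2), reflection by i/sqrt(2); the post's own convention differs in inessential ways - and it drops the full mirrors' shared phase factor, which cancels out of the comparison:

```python
from math import sqrt

T = 1 / sqrt(2)   # amplitude factor for passing through a half-silvered mirror
R = 1j / sqrt(2)  # amplitude factor for reflecting off one

# Two half-silvered mirrors in sequence (a Mach-Zehnder interferometer).
# A detector's amplitude is the SUM over every pathway leading to it.
detector_1 = T * R + R * T   # transmit-then-reflect + reflect-then-transmit
detector_2 = T * T + R * R   # transmit-twice + reflect-twice

print(abs(detector_1) ** 2)  # ~1.0: all the probability ends up here
print(abs(detector_2) ** 2)  # ~0.0: the two pathways cancel exactly
```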

Joint Configurations

The laws of physics are inherently over mathematical entities, configurations, that involve multiple particles. A basic, ontologically existent entity, according to our current understanding of quantum mechanics, does not look like a photon - it looks like a configuration of the universe with "A photon here, a photon there." Amplitude flows between these configurations can cancel or add; this gives us a way to detect which configurations are distinct. It is an experimentally testable fact that "Photon 1 here, photon 2 there" is the same configuration as "Photon 2 here, photon 1 there".

Distinct Configurations

Since configurations are over the combined state of all the elements in a system, adding a sensor that detects whether a particle went one way or the other, becomes a new element of the system that can make configurations "distinct" instead of "identical". This confused the living daylights out of early quantum experimenters, because it meant that things behaved differently when they tried to "measure" them. But it's not only measuring instruments that do the trick - any sensitive physical element will do - and the distinctness of configurations is a physical fact, not a fact about our knowledge. There is no need to suppose that the universe cares what we think.

Collapse Postulates

Early physicists simply didn't think of the possibility of more than one world - it just didn't occur to them, even though it's the straightforward result of applying the quantum laws at all levels. So they accidentally invented a completely and strictly unnecessary part of quantum theory to ensure there was only one world - a law of physics that says that parts of the wavefunction mysteriously and spontaneously disappear when decoherence prevents us from seeing them any more. If such a law really existed, it would be the only non-linear, non-unitary, non-differentiable, non-local, non-CPT-symmetric, acausal, faster-than-light phenomenon in all of physics.

Decoherence is Simple

The idea that decoherence fails the test of Occam's Razor is wrong as a matter of probability theory.

Decoherence is Falsifiable and Testable

(Note: Designed to be standalone readable.) An epistle to the physicists. To probability theorists, words like "simple", "falsifiable", and "testable" have exact mathematical meanings, which are there for very strong reasons. The (minority?) faction of physicists who say that many-worlds is "not falsifiable" or that it "violates Occam's Razor" or that it is "untestable", are committing the same kind of mathematical crime as non-physicists who invent their own theories of gravity that go as inverse-cube. This is one of the reasons why I, a non-physicist, dared to talk about physics - because I saw (some!) physicists using probability theory in a way that was simply wrong. Not just criticizable, but outright mathematically wrong: 2 + 2 = 3.

Privileging the Hypothesis

Singling out a particular hypothesis for attention, when you have no evidence already in hand that picks it out of the vast space of possibilities, is itself an error - like a detective with no clues saying, "What if the murderer was Mortimer Q. Snodgrass?" Most of the work of locating the truth is done by whatever promotes a hypothesis to your attention in the first place.

Living in Many Worlds

The many worlds of quantum mechanics are not some strange, alien universe into which you have been thrust. They are where you have always lived. Egan's Law: "It all adds up to normality." Then why care about quantum physics at all? Because there's still the question of what adds up to normality, and the answer to this question turns out to be, "Quantum physics." If you're thinking of building any strange philosophies around many-worlds, you probably shouldn't - that's not what it's for.

Quantum Non-Realism

"Shut up and calculate" is the best approach you can take when none of your theories are very good. But that is not the same as claiming that "Shut up!" actually is a theory of physics. Saying "I don't know what these equations mean, but they seem to work" is a very different matter from saying: "These equations definitely don't mean anything, they just work!"

If Many-Worlds Had Come First

If early physicists had never made the mistake, and thought immediately to apply the quantum laws at all levels to produce macroscopic decoherence, then "collapse postulates" would today seem like a completely crackpot theory. In addition to their other problems, like FTL, the collapse postulate would be the only physical law that was informally specified - often in dualistic (mentalistic) terms - because it was the only fundamental law adopted without precise evidence to nail it down. Here, we get a glimpse at that alternate Earth.

Where Philosophy Meets Science

In retrospect, supposing that quantum physics had anything to do with consciousness was a big mistake. Could philosophers have told the physicists so? But we don't usually see philosophers sponsoring major advances in physics; why not?

Thou Art Physics

If the laws of physics control everything we do, then how can our choices be meaningful? Because you are physics. You aren't competing with physics for control of the universe, you are within physics. Anything you control is necessarily controlled by physics.

Many Worlds, One Best Guess

Summarizes the arguments that nail down macroscopic decoherence, aka the "many-worlds interpretation". Concludes that many-worlds wins outright given the current state of evidence. The argument should have ended fifty years ago. New physical evidence could reopen it, but we have no particular reason to expect this.

T: Science and Rationality

The Failures of Eld Science

A short story set in the same world as "Initiation Ceremony". Future physics students look back on the cautionary tale of quantum physics.

The Dilemma: Science or Bayes?

The failure of first-half-of-20th-century-physics was not due to straying from the scientific method. Science and rationality - that is, Science and Bayesianism - aren't the same thing, and sometimes they give different answers.

Science Doesn't Trust Your Rationality

The reason Science doesn't always agree with the exact, Bayesian, rational answer, is that Science doesn't trust you to be rational. It wants you to go out and gather overwhelming experimental evidence.

When Science Can't Help

If you have an idea, Science tells you to test it experimentally. If you spend 10 years testing the idea and the result comes out negative, Science slaps you on the back and says, "Better luck next time." If you want to spend 10 years testing a hypothesis that will actually turn out to be right, you'll have to try to do the thing that Science doesn't trust you to do: think rationally, and figure out the answer before you get clubbed over the head with it.

Science Isn't Strict Enough

Science lets you believe any damn stupid idea that hasn't been refuted by experiment. Bayesianism says there is always an exactly rational degree of belief given your current evidence, and this does not shift a nanometer to the left or to the right depending on your whims. Science is a social freedom - we let people test whatever hypotheses they like, because we don't trust the village elders to decide in advance - but you shouldn't confuse that with an individual standard of rationality.

Do Scientists Already Know This Stuff?

No. Maybe someday it will be part of standard scientific training, but for now, it's not, and the absence is visible.

No Safe Defense, Not Even Science

Why am I trying to break your trust in Science? Because you can't think and trust at the same time. The social rules of Science are verbal rather than quantitative; it is possible to believe you are following them. With Bayesianism, it is never possible to do an exact calculation and get the exact rational answer that you know exists. You are visibly less than perfect, and so you will not be tempted to trust yourself.

Changing the Definition of Science

Many of these ideas are surprisingly conventional, and being floated around by other thinkers. I'm a good deal less of a lonely iconoclast than I seem; maybe it's just the way I talk.

Faster Than Science

Is it really possible to arrive at the truth faster than Science does? Not only is it possible, but the social process of science relies on scientists doing so - when they choose which hypotheses to test. In many answer spaces it's not possible to find the true hypothesis by accident. Science leaves it up to experiment to socially declare who was right, but if there weren't some people who could get it right in the absence of overwhelming experimental proof, science would be stuck.

Einstein's Speed

Albert was unusually good at finding the right theory in the presence of only a small amount of experimental evidence. Even more unusually, he admitted it - he claimed to know the theory was right, even in advance of the public proof. It's possible to arrive at the truth by thinking great high-minded thoughts of the sort that Science does not trust you to think, but it's a lot harder than arriving at the truth in the presence of overwhelming evidence.

That Alien Message

Einstein used evidence more efficiently than other physicists, but he was still extremely inefficient in an absolute sense. If a huge team of cryptographers and physicists were examining an interstellar transmission, going over it bit by bit, they could deduce principles on the order of Galilean gravity just from seeing one or two frames of a picture. As if the very first human to see an apple fall had, on the instant, realized that its position went as the square of the time and that this implied constant acceleration.

My Childhood Role Model

I looked up to the ideal of a Bayesian superintelligence, not Einstein.

Einstein's Superpowers

There's an unfortunate tendency to talk as if Einstein had superpowers - as if, even before Einstein was famous, he had an inherent disposition to be Einstein - a potential as rare as his fame and as magical as his deeds. Yet the way you acquire superpowers is not by being born with them, but by seeing, with a sudden shock, that they are perfectly normal.

Class Project

From the world of Initiation Ceremony. Brennan and the others are faced with their midterm exams.

(alternate summary:)

The students are given one month to develop a theory of quantum gravity.

Book V: Mere Goodness

U: Fake Preferences

Not for the Sake of Happiness (Alone)

Tackles the Hollywood Rationality trope that "rational" preferences must reduce to selfish hedonism - caring strictly about personally experienced pleasure. An ideal Bayesian agent - implementing strict Bayesian decision theory - can have a utility function that ranges over anything, not just internal subjective experiences.

Fake Selfishness

Many people who espouse a philosophy of selfishness aren't really selfish. If they were selfish, there are a lot more productive things to do with their time than espouse selfishness, for instance. Instead, individuals who proclaim themselves selfish do whatever it is they actually want, including altruism, but can always find some sort of self-interest rationalization for their behavior.

Fake Morality

Many people provide fake justifications for their own morality. Religious people claim that the only reason people don't murder each other is God. Professed egoists provide altruistic justifications for selfishness. Altruists provide selfish justifications for altruism. If you want to know how moral someone is, don't look at their reasons. Look at what they actually do.

Fake Utility Functions

Describes the seeming fascination that many have with trying to compress morality down to a single principle. The sequence leading up to this post tries to explain the cognitive twists whereby people smuggle all of their complicated other preferences into their choice of exactly which acts they try to justify using their single principle; but if they were really following only that single principle, they would choose other acts to justify.

Detached Lever Fallacy

There is a lot of machinery hidden beneath our words, and the rationalist's Taboo technique is one way to take a step toward exposing it.

Dreams of AI Design

It can feel as though you understand how to build an AI, when really, you're still making all your predictions based on empathy. Your AI design will not work until you figure out a way to reduce the mental to the non-mental.

The Design Space of Minds-In-General

When people talk about "AI", they're talking about an incredibly wide range of possibilities. Having a word like "AI" is like having a word for everything which isn't a duck.

V: Value Theory

Where Recursive Justification Hits Bottom

Ultimately, when you reflect on how your mind operates, and consider questions like "Why does Occam's Razor work?" and "Why do I expect the future to be like the past?", you have no option but to use your own mind. There is no way to jump to an ideal state of pure emptiness and evaluate these claims without using your existing mind.

My Kind of Reflection

A few key differences between Eliezer Yudkowsky's ideas on reflection and the ideas of other philosophers.

No Universally Compelling Arguments

Because minds are physical processes, it is theoretically possible to specify a mind which draws any conclusion in response to any argument. There is no argument that will convince every possible mind.

Created Already In Motion

There is no computer program so persuasive that you can run it on a rock. A mind, in order to be a mind, needs some sort of dynamic rules of inference or action. A mind has to be created already in motion.

Sorting Pebbles Into Correct Heaps

A parable about an imaginary society that has arbitrary, alien values.

2-Place and 1-Place Words

It is possible to talk about "sexiness" as a 2-place property of an observer and a subject. It is equally possible to talk about "sexiness" as a 1-place property of the subject alone, as long as each observer is allowed a different process for determining how sexy someone is. Failing to keep track of which one you mean will cause you trouble.
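
In programming terms, the post's move is currying: fixing the observer argument of a 2-place function yields a separate 1-place function for each observer. A minimal sketch - the observers and their judgment processes below are invented for illustration:

```python
from functools import partial

def sexiness(observer, subject):
    """2-place version: depends on both who is judging and who is judged."""
    return observer["judgment_process"](subject)

fred = {"judgment_process": lambda s: 1.0 if s == "human" else 0.0}
alien = {"judgment_process": lambda s: 1.0 if s == "slimy tentacled thing" else 0.0}

# Fixing the observer gives a 1-place function per observer; the two
# disagree about the same subject with no contradiction anywhere.
fred_sexiness = partial(sexiness, fred)
alien_sexiness = partial(sexiness, alien)
print(fred_sexiness("human"), alien_sexiness("human"))  # 1.0 0.0
```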

What Would You Do Without Morality?

If your own theory of morality was disproved, and you were persuaded that there was no morality, that everything was permissible and nothing was forbidden, what would you do? Would you still tip cabdrivers?

Changing Your Metaethics

Discusses the various lines of retreat that have been set up in the discussion on metaethics.

Could Anything Be Right?

You do know quite a bit about morality. It's not perfect information, surely, or absolutely reliable, but you have someplace to start. If you didn't, you'd have a much harder time thinking about morality than you do.

Morality as Fixed Computation

A clarification about Yudkowsky's metaethics.

Magical Categories

We underestimate the complexity of our own unnatural categories, and trusting such a category to generalize the way we intend does not work when you're trying to build a Friendly AI (FAI).

The True Prisoner's Dilemma

The standard visualization for the Prisoner's Dilemma doesn't really work on humans. We can't pretend we're completely selfish.

Sympathetic Minds

Mirror neurons are neurons that fire both when performing an action oneself, and watching someone else perform the same action - for example, a neuron that fires when you raise your hand or watch someone else raise theirs. We predictively model other minds by putting ourselves in their shoes, which is empathy. But some of our desire to help relatives and friends, or be concerned with the feelings of allies, is expressed as sympathy, feeling what (we believe) they feel. Like "boredom", the human form of sympathy would not be expected to arise in an arbitrary expected-utility-maximizing AI. Most such agents would regard any agents in its environment as a special case of complex systems to be modeled or optimized; it would not feel what they feel.

High Challenge

Life should not always be made easier for the same reason that video games should not always be made easier. Think in terms of eliminating low-quality work to make way for high-quality work, rather than eliminating all challenge. One needs games that are fun to play and not just fun to win. Life's utility function is over 4D trajectories, not just 3D outcomes. Values can legitimately be over the subjective experience, the objective result, and the challenging process by which it is achieved - the traveller, the destination and the journey.

Serious Stories

Stories and lives are optimized according to rather different criteria. Advice on how to write fiction will tell you that "stories are about people's pain" and "every scene must end in disaster". I once assumed that it was not possible to write any story about a successful Singularity because the inhabitants would not be in any pain; but something about the final conclusion that the post-Singularity world would contain no stories worth telling seemed alarming. Stories in which nothing ever goes wrong, are painful to read; would a life of endless success have the same painful quality? If so, should we simply eliminate that revulsion via neural rewiring? Pleasure probably does retain its meaning in the absence of pain to contrast it; they are different neural systems. The present world has an imbalance between pain and pleasure; it is much easier to produce severe pain than correspondingly intense pleasure. One path would be to address the imbalance and create a world with more pleasures, and free of the more grindingly destructive and pointless sorts of pain. Another approach would be to eliminate pain entirely. I feel like I prefer the former approach, but I don't know if it can last in the long run.

Value is Fragile

An interesting universe, one that would be incomprehensible to us today, is what the future looks like if things go right. There are many things humans value such that, if you got everything else right when building an AI but left out that one thing, the future would wind up looking dull, flat, pointless, or empty. Any future not shaped by a goal system with detailed reliable inheritance from human morals and metamorals will contain almost nothing of worth.

The Gift We Give To Tomorrow

How did love ever come into the universe? How did that happen, and how special was it, really?

W: Quantified Humanism

Scope Insensitivity

The human brain can't represent large quantities: an environmental measure that will save 200,000 birds doesn't conjure anywhere near a hundred times the emotional impact and willingness-to-pay of a measure that would save 2,000 birds.

One Life Against the World

Saving one life and saving the whole world provide the same warm glow. But, however valuable a life is, the whole world is billions of times as valuable. The duty to save lives doesn't stop after the first saved life. Choosing to save one life when you could have saved two is as bad as murder.

The Allais Paradox

(and subsequent followups) - When offered choices between gambles, people make decision-theoretically inconsistent decisions.

Zut Allais!

When offered choices between gambles, people make decision-theoretically inconsistent decisions.
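
The inconsistency is easy to exhibit with the classic Allais gambles. These are the standard textbook numbers, used here for illustration; the posts themselves run the experiment with different dollar amounts:

```python
def expected_value(gamble):
    return sum(p * payoff for p, payoff in gamble)

g1a = [(1.00, 1_000_000)]                                # $1M for certain
g1b = [(0.89, 1_000_000), (0.10, 5_000_000), (0.01, 0)]
g2a = [(0.11, 1_000_000), (0.89, 0)]
g2b = [(0.10, 5_000_000), (0.90, 0)]

print(expected_value(g1a), expected_value(g1b))  # 1,000,000 vs 1,390,000
print(expected_value(g2a), expected_value(g2b))  #   110,000 vs   500,000

# Most subjects choose 1A over 1B, and 2B over 2A. But 2A and 2B are
# exactly 1A and 1B with a common "89% chance of $1M" component removed
# from both, so no coherent expected-utility maximizer can make both choices.
```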

Feeling Moral

The "Intuitions" Behind "Utilitarianism"

Our intuitions, the underlying cognitive tricks that we use to build our thoughts, are an indispensable part of our cognition. The problem is that many of those intuitions are incoherent, or are undesirable upon reflection. But if you try to "renormalize" your intuition, you wind up with what is essentially utilitarianism.

Ends Don't Justify Means (Among Humans)

Ethical Injunctions

Something to Protect

Many people only start to grow as rationalists when they find something that they care about more than they care about rationality itself. It takes something really scary to cause you to override your intuitions with math.

When (Not) To Use Probabilities

When you don't have a numerical procedure to generate probabilities, you're probably better off using your own evolved abilities to reason in the presence of uncertainty.

Newcomb's Problem and Regret of Rationality

Newcomb's problem is a very famous decision theory problem in which the "rational" move appears to be consistently punished. But treating this as a case where rationality loses is the wrong attitude to take. Rationalists should win. If your particular ritual of cognition consistently fails to yield good results, change the ritual.

Book VI: Becoming Stronger

X: Yudkowsky's Coming of Age

My Childhood Death Spiral

My Best and Worst Mistake

When Eliezer went into his death spiral around intelligence, he wound up making a lot of mistakes that later became very useful.

Raised in Technophilia

Because Eliezer was raised a technophile, it took him a very long time to get to the point where he was capable of considering that the dangers of technology might outweigh the benefits.

A Prodigy of Refutation

Eliezer's skills at defeating other people's ideas led him to believe that his own (mistaken) ideas must have been correct.

The Sheer Folly of Callow Youth

Eliezer's big mistake was taking a mysterious view of morality.

That Tiny Note of Discord

Eliezer started to dig himself out of his philosophical hole when he noticed a tiny inconsistency.

Fighting a Rearguard Action Against the Truth

When Eliezer started to consider the possibility of Friendly AI as a contingency plan, he permitted himself a line of retreat. He was now able to slowly start to reconsider positions in his metaethics, and move gradually towards better ideas.

My Naturalistic Awakening

Eliezer finally looked back and realized his mistakes when he hit upon the idea of an optimization process.

The Level Above Mine

There are people who have acquired more mastery over various fields than Eliezer has over his.

The Magnitude of His Own Folly

Eliezer considers his training as a rationalist to have started the day he realized just how awfully he had screwed up.

Beyond the Reach of God

Compare the world in which there is a God, who will intervene at some threshold, against a world in which everything happens as a result of physical laws. Which universe looks more like our own?

My Bayesian Enlightenment

The story of how Eliezer Yudkowsky became a Bayesian.

Y: Challenging the Difficult

Tsuyoku Naritai! (I Want To Become Stronger)

Don't be satisfied knowing you are biased; instead, aspire to become stronger, studying your flaws so as to remove them. There is a temptation to take pride in confessions, which can impede progress.

Tsuyoku vs. the Egalitarian Instinct

There may be evolutionary psychological factors that encourage modesty and mediocrity, at least in appearance; while some of that may still apply today, you should mentally plan and strive to pull ahead, if you are doing things right.

Trying to Try

As a human, if you try to try something, you will put much less work into it than if you simply try to do it.

Use the Try Harder, Luke

A fictional exchange between Mark Hamill and George Lucas over the scene in The Empire Strikes Back in which Luke Skywalker attempts to lift his X-wing with the Force.

On Doing the Impossible

A lot of projects seem impossible, meaning that we don't immediately see a way to do them. But after working on them for a long time, they start to look merely extremely difficult.

Make an Extraordinary Effort

It takes an extraordinary amount of rationality before you stop making stupid mistakes. Doing better requires making extraordinary efforts.

Shut up and do the impossible!

The ultimate level of attacking a problem is the point at which you simply shut up and solve the impossible problem.

Final Words

The conclusion of the Beisutsukai series.

Z: The Craft and the Community

Raising the Sanity Waterline

Behind every particular failure of social rationality is a larger and more general failure of social rationality; even if all religious content were deleted tomorrow from all human minds, the larger failures that permit religion would still be present. Religion may serve the function of an asphyxiated canary in a coal mine - getting rid of the canary doesn't get rid of the gas. Even a complete social victory for atheism would only be the beginning of the real work of rationalists. What could you teach people, without ever explicitly mentioning religion, that would raise their general epistemic waterline to the point that religion went underwater?

A Sense That More Is Possible

The art of human rationality may not have been much developed because its practitioners lack a sense that vastly more is possible. The level of expertise that most rationalists strive to develop is not on a par with the skills of a professional mathematician - it is more like that of a strong casual amateur. Self-proclaimed "rationalists" don't seem to get huge amounts of personal mileage out of their craft, and no one sees a problem with this. Yet rationalists get less systematic training, in a less systematic context, than a first-dan black belt gets in hitting people.

Epistemic Viciousness

An essay by Gillian Russell on "Epistemic Viciousness in the Martial Arts" generalizes amazingly well to possible and actual problems with building a community around rationality. Most notable are the extreme dangers associated with "data poverty" - the difficulty of testing the skills in the real world - but there are also such factors as the sacredness of the dojo, the investment in teachings long practiced, the difficulty of book learning that leads to the need to trust a teacher, deference to historical masters, and, above all, living in data poverty while continuing to act as if the luxury of trust were possible.

Schools Proliferating Without Evidence

The branching schools of "psychotherapy", another domain in which experimental verification was weak (nonexistent, actually), show that an aspiring craft lives or dies by the degree to which it can be tested in the real world. In the absence of that testing, one becomes prestigious by inventing yet another school and having students, rather than excelling at any visible performance criterion. The field of hedonic psychology (happiness studies) began, to some extent, with the realization that you could measure happiness - that there was a family of measures that by golly did validate well against each other. The act of creating a new measurement creates new science; if it's a good measurement, you get good science.

3 Levels of Rationality Verification

How far the craft of rationality can be taken depends largely on what methods can be invented for verifying it. Tests seem usefully stratifiable into reputational, experimental, and organizational. A "reputational" test is some real-world problem that tests the ability of a teacher or a school (like running a hedge fund, say) - "keeping it real", but without being able to break down exactly what was responsible for success. An "experimental" test is one that can be run on each of a hundred students (such as a well-validated survey). An "organizational" test is one that can be used to preserve the integrity of organizations by validating individuals or small groups, even in the face of strong incentives to game the test. The strength of the solutions invented at each level will determine how far the craft of rationality can go in the real world.

Why Our Kind Can't Cooperate

The atheist/libertarian/technophile/sf-fan/early-adopter/programmer/etc crowd, aka "the nonconformist cluster", seems to be stunningly bad at coordinating group projects. There are a number of reasons for this, but one of them is that people are as reluctant to speak agreement out loud as they are eager to voice disagreements - the exact opposite of the situation that obtains in more cohesive and powerful communities. This is not rational either! It is dangerous to be half a rationalist (in general), and this also applies to teaching only disagreement but not agreement, or only lonely defiance but not coordination. The pseudo-rationalist taboo against expressing strong feelings probably doesn't help either.

Tolerate Tolerance

One of the likely characteristics of someone who sets out to be a "rationalist" is a lower-than-usual tolerance for flawed thinking. This makes it very important to tolerate other people's tolerance - to avoid rejecting them because they tolerate people you wouldn't - since otherwise we must all have exactly the same standards of tolerance in order to work together, which is unlikely. Even if someone has a nice word to say about complete lunatics and crackpots - so long as they don't literally believe the same ideas themselves - try to be nice to them. Intolerance of tolerance corresponds to punishment of non-punishers, a very dangerous game-theoretic idiom that can lock completely arbitrary systems in place, even when they benefit no one at all.

Your Price for Joining

The game-theoretical puzzle of the Ultimatum game has its reflection in a real-world dilemma: how much do you demand that an existing group adjust toward you before you will adjust toward it? Our hunter-gatherer instincts are tuned to groups of 40 with very minimal administrative demands and equal participation, meaning that we underestimate the inertia of larger and more specialized groups and demand too much before joining them. In other groups this resistance can be overcome by affective death spirals and conformity, but rationalists think themselves too good for this - with the result that people in the nonconformist cluster often set their joining prices way way way too high, like a 50-way split with each player demanding 20% of the money. Nonconformists need to move in the direction of joining groups more easily, even in the face of annoyances and apparent unresponsiveness. If an issue isn't worth personally fixing by however much effort it takes, it's not worth refusing to contribute over.

Can Humanism Match Religion's Output?

Anyone with a simple and obvious charitable project - responding with food and shelter to a tidal wave in Thailand, say - would be better off by far pleading with the Pope to mobilize the Catholics, rather than with Richard Dawkins to mobilize the atheists. For so long as this is true, any increase in atheism at the expense of Catholicism will be something of a hollow victory, regardless of all other benefits. Can no rationalist match the motivation that comes from the irrational fear of Hell? Or does the real story have more to do with the motivating power of physically meeting others who share your cause, and group norms of participating?

Church vs. Taskforce

Churches serve a role of providing community - but they aren't explicitly optimized for this, because their nominal role is different. If we desire community without church, can we go one better in the course of deleting religion? There's a great deal of work to be done in the world; rationalist communities might potentially organize themselves around good causes, while explicitly optimizing for community.

Rationality: Common Interest of Many Causes

Many causes benefit particularly from the spread of rationality - because it takes a little more rationality than usual to see their case, as a supporter, or even just a supportive bystander. Not just the obvious causes like atheism, but things like marijuana legalization. In the case of my own work this effect was strong enough that after years of bogging down I threw up my hands and explicitly recursed on creating rationalists. If such causes can come to terms with not individually capturing all the rationalists they create, then they can mutually benefit from mutual effort on creating rationalists. This cooperation may require learning to shut up about disagreements between such causes, and not fight over priorities, except in specialized venues clearly marked.

Helpless Individuals

When you consider that our grouping instincts are optimized for 50-person hunter-gatherer bands where everyone knows everyone else, it begins to seem miraculous that modern-day large institutions survive at all. And in fact, the vast majority of large modern-day institutions simply fail to exist in the first place. This is why funding of Science is largely through money thrown at Science rather than donations from individuals - research isn't a good emotional fit for the rare problems that individuals can manage to coordinate on. In fact very few things are, which is why e.g. 200 million adult Americans have such tremendous trouble supervising the 535 members of Congress. Modern humanity manages to put forth very little in the way of coordinated individual effort to serve our collective individual interests.

Money: The Unit of Caring

Omohundro's resource balance principle implies that the inside of any approximately rational system has a common currency of expected utilons. In our world, this common currency is called "money" and it is the unit of how much society cares about something - a brutal yet obvious point. Many people, seeing a good cause, would prefer to help it by donating a few volunteer hours. But this avoids the tremendous gains of comparative advantage, professional specialization, and economies of scale - the reason we're not still in caves, the only way anything ever gets done in this world, the tools grownups use when anyone really cares. Donating hours worked within a professional specialty and paying-customer priority, whether directly, or by donating the money earned to hire other professional specialists, is far more effective than volunteering unskilled hours.
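A back-of-the-envelope sketch of the comparative-advantage arithmetic (the wage figures are hypothetical, chosen only for illustration; they are not from the essay):

```python
# Comparing an hour volunteered against an hour worked-and-donated.
# Hypothetical assumption: you bill $60/hour in your specialty, while
# the charity can hire unskilled labor at $10/hour.

professional_wage = 60.0
unskilled_wage = 10.0
hours = 5

labor_from_volunteering = hours                  # 5 unskilled hours delivered
donation = hours * professional_wage             # $300 earned and donated
labor_from_donating = donation / unskilled_wage  # buys 30 unskilled hours

print(f"Volunteering {hours}h gives the charity {labor_from_volunteering}h of labor")
print(f"Working {hours}h and donating ${donation:.0f} buys {labor_from_donating:.0f}h of labor")
```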

Purchase Fuzzies and Utilons Separately

Wealthy philanthropists typically make the mistake of trying to purchase warm fuzzy feelings, status among friends, and actual utilitarian gains, simultaneously; this results in vague pushes along all three dimensions and a mediocre final result. It should be far more effective to spend some money/effort on buying altruistic fuzzies at maximum optimized efficiency (e.g. by helping people in person and seeing the results in person), buying status at maximum efficiency (e.g. by donating to something sexy that you can brag about, regardless of effectiveness), and spending most of your money on expected utilons (chosen through sheer cold-blooded shut-up-and-multiply calculation, without worrying about status or fuzzies).

Bystander Apathy

The bystander effect is the finding that groups of people are less likely to take action in an emergency than a lone individual. There are a few explanations for why this might be the case.

Collective Apathy and the Internet

The causes of bystander apathy are even worse on the Internet. There may be an opportunity here for a startup to deliberately try to avert bystander apathy in online group coordination.

Incremental Progress and the Valley

The optimality theorems for probability theory and decision theory are for perfect probability theory and decision theory. There is no theorem that incremental changes toward the ideal, starting from a flawed initial form, must yield incremental progress at each step along the way. Since perfection is unattainable, why dare to try for improvement? But my limited experience with specialized applications suggests that, given enough progress, one can achieve huge improvements over baseline - it just takes a lot of progress to get there.

Bayesians vs. Barbarians

Suppose that a country of rationalists is attacked by a country of Evil Barbarians who know nothing of probability theory or decision theory. There's a certain concept of "rationality" which says that the rationalists inevitably lose, because the Barbarians believe in a heavenly afterlife if they die in battle, while the rationalists would all individually prefer to stay out of harm's way. So the rationalist civilization is doomed; it is too elegant and civilized to fight the savage Barbarians... And then there's the idea that rationalists should be able to (a) solve group coordination problems, (b) care a lot about other people and (c) win...

Beware of Other-Optimizing

Aspiring rationalists often vastly overestimate their own ability to optimize other people's lives. They read nineteen webpages offering productivity advice that doesn't work for them... and then encounter the twentieth page, or invent a new method themselves, and wow, it really works - they've discovered the true method. Actually, they've just discovered the one method in twenty that works for them, and their confident advice is no better than randomly selecting one of the twenty blog posts. Other-Optimizing is exceptionally dangerous when you have power over the other person - for then you'll just believe that they aren't trying hard enough.

Practical Advice Backed By Deep Theories

Knowledge of this heuristic might be useful in fighting akrasia.

(alternate summary:)

Practical advice is genuinely much, much more useful when it's backed up by concrete experimental results, causal models that are actually true, or valid math that is validly interpreted. (Listed in increasing order of difficulty.) Stripping out the theories and giving the mere advice alone wouldn't have nearly the same impact or even the same message; and oddly enough, translating experiments and math into practical advice seems to be a rare niche activity relative to academia. If there's a distinctive LW style, this is it.

The Sin of Underconfidence

When subjects know about a bias or are warned about a bias, overcorrection is not unheard of as an experimental result. That's what makes a lot of cognitive subtasks so troublesome - you know you're biased, but you're not sure how much, and if you keep tweaking you may overcorrect. The dangers of underconfidence (overcorrecting for overconfidence) are that you pass up opportunities on which you could have been successful; fail to challenge difficult enough problems; lose forward momentum and adopt defensive postures; refuse to put the hypothesis of your inability to the test; and lose enough hope of triumph to try hard enough to win. You should ask yourself, "Does this way of thinking make me stronger, or weaker?"

Go Forth and Create the Art!

I've developed primarily the art of epistemic rationality - in particular, the arts required for advanced cognitive reductionism... arts like distinguishing fake explanations from real ones and avoiding affective death spirals. There is much else that needs developing to create a craft of rationality: fighting akrasia; coordinating groups; teaching, training, verification, and becoming a proper experimental science; developing better introductory literature... And yet it seems to me that there is an initial barrier to surpass before you can start creating a high-quality craft of rationality, having to do with virtually everyone who tries to think lofty thoughts going instantly astray, or never even realizing that a craft of rationality exists and that you ought to be studying the cognitive science literature to create it. It's my hope that my writings, as partial as they are, will serve to surpass this initial barrier. The rest I leave to you.