Less Wrong/2007 Articles/Summaries

Some Claims Are Just Too Extraordinary

Publications in peer-reviewed scientific journals are more worthy of trust than what you detect with your own ears and eyes.

(alternate summary:)

Certain repeated scientific experiments imply Bayesian priors so extreme that you should believe the scientific consensus over the evidence of your own eyes, when they conflict.

Outside the Laboratory

Outside the laboratory: those who understand the map/territory distinction will *integrate* their knowledge, as they see the evidence that reality is a single unified process.

(alternate summary:)

Written regarding the proverb "Outside the laboratory, scientists are no wiser than anyone else." The case is made that if this proverb is in fact true, that's quite worrisome because it implies that scientists are blindly following scientific rituals without understanding why. In particular, it is argued that if a scientist is religious, they probably don't understand the foundations of science very well.

Politics is the Mind-Killer

Beware in your discussions that for clear evolutionary reasons, people have great difficulty being rational about current political issues.

(alternate summary:)

People act funny when they talk about politics. In the ancestral environment, being on the wrong side might get you killed, and being on the correct side might get you sex, food or let you kill your hated rival. If you must talk about politics (for the purposes of teaching rationality) use examples from the distant past. Politics is an extension of war by other means. Arguments are soldiers. Once you know which side you're on, you must support all arguments of that side, and attack all arguments that appear to favor the enemy side; otherwise it's like stabbing your soldiers in the back - providing aid and comfort to the enemy. If your topic legitimately relates to attempts to ban evolution in school curricula, then go ahead and talk about it - but don't blame it explicitly on the whole Republican Party (Democratic/Liberal/Conservative/Nationalist).

Just Lose Hope Already

Admit when the evidence goes against you, else things can get a whole lot worse.

(alternate summary:)

Casey Serin owes banks 2.2 million dollars after lying on mortgage applications in order to simultaneously buy 8 different houses in different states. The sad part is that he hasn't given up - he hasn't declared bankruptcy, and has just attempted to purchase another house. While this behavior seems merely stupid, it recalls Merton and Scholes of Long-Term Capital Management, who made 40% profits for three years and then lost it all when they overleveraged. Each profession has its own rules for success, which makes rationality seem unlikely to help greatly in life. Yet one of the greater skills is simply not being stupid, and rationality does help with that.

You Are Not Hiring the Top 1%

Interviewees are a selection-biased sample of the pool, skewed toward those who are not successful or happy in their current jobs.

(alternate summary:)

Software companies may see themselves as being very selective about who they hire. Out of 200 applicants, they may hire just one or two. However, that doesn't necessarily mean that they're hiring the top 1%. The programmers who weren't hired are likely to apply for jobs somewhere else. Overall, the worst programmers will apply for many more jobs over the course of their careers than the best. So programmers who are applying for a particular job are not representative of programmers as a whole. This phenomenon probably shows up in other places as well.

Policy Debates Should Not Appear One-Sided

Debates over outcomes with multiple effects will have arguments both for and against, so you must integrate the evidence, not expect the issue to be completely one-sided.

(alternate summary:)

Robin Hanson proposed a "banned products shop" where things that the government ordinarily would ban are sold. Eliezer responded that this would probably cause at least one stupid and innocent person to die. He was surprised when people inferred from this remark that he was against Robin's idea. Policy questions are complex actions with many consequences. Thus they should only rarely appear one-sided to an objective observer. A person's intelligence is largely a product of circumstances they cannot control. Eliezer argues for cost-benefit analysis instead of traditional libertarian ideas of tough-mindedness (people who do stupid things deserve their consequences).

Burch's Law

Just because your ethics require an action doesn't mean the universe will exempt you from the consequences.

(alternate summary:)

Just because your ethics require an action doesn't mean the universe will exempt you from the consequences. Manufactured cars kill an estimated 1.2 million people per year worldwide. (Roughly 2% of the annual planetary death rate.) Not everyone who dies in an automobile accident is someone who decided to drive a car. The tally of casualties includes pedestrians. It includes minor children who had to be pushed screaming into the car on the way to school. And yet we still manufacture automobiles, because, well, we're in a hurry. The point is that the consequences don't change no matter how good the ethical justification sounds.

The Scales of Justice, the Notebook of Rationality

People have an irrational tendency to simplify their assessment of things into how good or bad they are without considering that the things in question may have many distinct and unrelated attributes.

(alternate summary:)

In non-binary answer spaces, you can't add up pro and con arguments along one dimension without risk of getting important factual questions wrong.

Blue or Green on Regulation?

Both sides are often right in describing the terrible things that will happen if we take the other side's advice; the universe is "unfair", terrible things are going to happen regardless of what we do, and it's our job to trade off for the least bad outcome.

(alternate summary:)

In a rationalist community, it should not be necessary to talk in the usual circumlocutions when talking about empirical predictions. We should know that people think of arguments as soldiers and recognize that behavior in ourselves. Thinking honestly about the truth values involved, you come to see that much of what the Greens said about the downside of the Blue policy was true - that, left to the mercy of the free market, many people would be crushed by powers far beyond their understanding, nor would they deserve it. And most of what the Blues said about the downside of the Green policy was also true - that regulators were fallible humans with poor incentives, whacking on delicately balanced forces with a sledgehammer.

(alternate summary:)

Burch's law isn't a soldier-argument for regulation; estimating the appropriate level of regulation in each particular case is a superior third option.

Superstimuli and the Collapse of Western Civilization

As a side effect of evolution, super-stimuli exist, and as a result of economics, are getting and should continue to get worse.

(alternate summary:)

At least 3 people have died from playing online games non-stop. How is it that a game is so enticing that, after 57 straight hours of playing, a person would rather spend the next hour playing than sleeping or eating? A candy bar is a superstimulus: it corresponds overwhelmingly well to the EEA's healthy-food characteristics of sugar and fat. If people enjoy these things, the market will respond by providing as much of them as possible, even if other considerations make them undesirable.

Useless Medical Disclaimers

Medical disclaimers without probabilities are hard to use, and if probabilities aren't there because some people can't handle having them there, maybe we ought to tax those people.

(alternate summary:)

Eliezer complains about a disclaimer he had to sign before getting toe surgery because it didn't give numerical probabilities for the possible negative outcomes it described. He guesses this is because of people afflicted with "innumeracy" who would over-interpret small numbers. He proposes a tax wherein folks are asked if they are innumerate and asked to pay in proportion to their innumeracy. This tax is revealed in the comments to be a state-sponsored lottery.

Archimedes's Chronophone

Consider the thought experiment where you communicate general thinking patterns which will lead to right answers, as opposed to pre-hashed content...

(alternate summary:)

Imagine that Archimedes of Syracuse invented a device that allows you to talk to him. Imagine the possibilities for improving history! Unfortunately, the device will not literally transmit your words - it transmits cognitive strategies. If you advise giving women the vote, it comes out as advising finding a wise tyrant, the Greek ideal of political discourse. Under such restrictions, what do you say to Archimedes?

Chronophone Motivations

If you want to really benefit humanity, do some original thinking, especially about areas of application, and directions of effort.

(alternate summary:)

The point of the chronophone dilemma is to make us think about what kind of cognitive policies are good to follow when you don't know your destination in advance.

Self-deception: Hypocrisy or Akrasia?

It is suggested that in some cases, people who say one thing and do another thing are not in fact "hypocrites". Instead they are suffering from "akrasia" or weakness of will. At the end, the problem of deciding what parts of a person's mind are considered their "real self" is discussed.

(alternate summary:)

If part of a person--for example, the verbal module--says it wants to become more rational, we can ally with that part even when weakness of will makes the person's actions otherwise; hypocrisy need not be assumed.

Tsuyoku Naritai! (I Want To Become Stronger)

Don't be satisfied knowing you are biased; instead, aspire to become stronger, studying your flaws so as to remove them. There is a temptation to take pride in confessions, which can impede progress.

Tsuyoku vs. the Egalitarian Instinct

There may be evolutionary psychological factors that encourage modesty and mediocrity, at least in appearance; while some of that may still apply today, you should mentally plan and strive to pull ahead, if you are doing things right.

"Statistical Bias"

There are two types of error: systematic error and random variance error; by repeating experiments you can average out and drive down the variance error.
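
A minimal numerical sketch of the distinction (the bias and noise figures are assumed for illustration): averaging repeated measurements drives the random error toward zero, while the systematic error stays put.

    import random

    TRUE_VALUE = 10.0
    SYSTEMATIC_BIAS = 0.5   # assumed: a miscalibrated instrument
    NOISE_SD = 2.0          # assumed: random measurement noise

    def measure():
        return TRUE_VALUE + SYSTEMATIC_BIAS + random.gauss(0, NOISE_SD)

    for n in (1, 100, 10000):
        estimate = sum(measure() for _ in range(n)) / n
        # the error shrinks toward +0.5 (the bias), not toward 0
        print(n, round(estimate - TRUE_VALUE, 3))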

Useful Statistical Biases

If you know an estimator has high variance, you can intentionally introduce bias by choosing a simpler hypothesis, and thereby lower expected variance while raising expected bias; sometimes total error is lower, hence the "bias-variance tradeoff". Keep in mind that while statistical bias might be useful, cognitive biases are not.
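
A toy sketch of the tradeoff (all numbers assumed): shrinking the sample mean toward zero introduces bias but reduces variance, and for a noisy estimator the total squared error can come out lower.

    import random

    THETA, SD, N, TRIALS = 1.0, 3.0, 5, 100000

    def mse(shrink):
        total = 0.0
        for _ in range(TRIALS):
            xbar = sum(random.gauss(THETA, SD) for _ in range(N)) / N
            total += (shrink * xbar - THETA) ** 2
        return total / TRIALS

    print(mse(1.0))  # unbiased sample mean: MSE ~ SD^2/N = 1.8
    print(mse(0.5))  # biased shrinkage estimator: MSE ~ 0.7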

The Error of Crowds

Variance decomposition does not imply majoritarian-ish results; this is an artifact of minimizing *square* error, and drops out using square root error when bias is larger than variance; how and why to factor in evidence requires more assumptions, as per Aumann agreement.

(alternate summary)

Mean squared error drops when we average our predictions, but only because it uses a convex loss function. If you faced a concave loss function, you wouldn't isolate yourself from others, which casts doubt on the relevance of Jensen's inequality for rational communication. The process of sharing thoughts and arguing differences is not like taking averages.
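
A two-forecaster sketch of the convexity point (the forecast values are assumed): under squared error, the averaged forecast beats the average of the individual losses, by Jensen's inequality; under a concave loss such as the square root of the absolute error, the inequality can reverse.

    f1, f2, truth = 1.0, 4.0, 0.0
    avg = (f1 + f2) / 2  # the "crowd" forecast

    # convex (squared) loss: loss of the average < average of the losses
    print((avg - truth) ** 2)                                     # 6.25
    print(((f1 - truth) ** 2 + (f2 - truth) ** 2) / 2)            # 8.5

    # concave (square-root) loss: averaging loses here
    print(abs(avg - truth) ** 0.5)                                # ~1.58
    print((abs(f1 - truth) ** 0.5 + abs(f2 - truth) ** 0.5) / 2)  # 1.5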

The Majority Is Always Wrong

Anything worse than the majority opinion should get selected out, so the majority opinion is rarely strictly superior to existing alternatives.

Knowing About Biases Can Hurt People

Learning common biases won't help you obtain truth if you only use this knowledge to attack beliefs you don't like. Discussions about biases need to first do no harm by emphasizing motivated cognition, the sophistication effect, and dysrationalia, although even knowledge of these can backfire.

Debiasing as Non-Self-Destruction

Not being stupid seems like a more easily generalizable skill than breakthrough success. If debiasing is mostly about not being stupid, its benefits are hidden: lottery tickets not bought, blind alleys not followed, cults not joined. Hence, checking whether debiasing works is difficult, especially in the absence of organizations or systematized training.

"Inductive Bias"

Inductive bias is a systematic direction in belief revisions. The same observations could be evidence for or against a belief, depending on your prior. Inductive biases are more or less correct depending on how well they correspond with reality, so "bias" might not be the best description.

Suggested Posts

This is an obsolete "meta" post.

Futuristic Predictions as Consumable Goods

The Friedman Unit is named after Thomas Friedman, who called "the next six months" the critical period in Iraq eight times between 2003 and 2007. This is because future predictions are created and consumed in the now; they are used to create feelings of delicious goodness or delicious horror now, not to provide useful future advice.

Marginally Zero-Sum Efforts

After a point, labeling a problem as "important" is a commons problem. Rather than increasing the total resources devoted to important problems, resources are taken from other projects. Some grant proposals need to be written, but eventually this process becomes zero- or negative-sum on the margin.

Priors as Mathematical Objects

As a mathematical object, a Bayesian "prior" is a probability distribution over sequences of observations. That is, the prior assigns a probability to every possible sequence of observations. In principle, you could then use the prior to compute the probability of any event by summing the probabilities of all observation-sequences in which that event occurs. Formally, the prior is just a giant look-up table. However, an actual Bayesian reasoner wouldn't literally implement a giant look-up table. Nonetheless, the formal definition of a prior is sometimes convenient. For example, if you are uncertain about which distribution to use, you can just use a weighted sum of distributions, which directly gives another distribution.
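
A look-up-table sketch of this definition (the two coin distributions are assumed for the example): a prior over 3-bit observation sequences, an event probability computed by summing, and a weighted sum of two priors that is itself a prior.

    from itertools import product

    def coin_prior(p):  # independent flips with P(bit = 1) = p
        prior = {}
        for seq in product((0, 1), repeat=3):
            prob = 1.0
            for bit in seq:
                prob *= p if bit else (1 - p)
            prior[seq] = prob
        return prior

    fair, biased = coin_prior(0.5), coin_prior(0.9)

    # unsure which distribution to use? a weighted sum is another prior
    mixture = {seq: 0.5 * fair[seq] + 0.5 * biased[seq] for seq in fair}

    # probability of an event = sum over all sequences in which it occurs
    print(sum(p for seq, p in mixture.items() if seq[0] == 1))  # 0.7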

Lotteries: A Waste of Hope

Some defend lottery-ticket buying as a rational purchase of fantasy. But you are occupying your valuable brain with a fantasy whose probability is nearly zero, wasting emotional energy. Without the lottery, people might fantasize about things that they can actually do, which might lead to thinking of ways to make the fantasy a reality. To work around a bias, you must first notice it, analyze it, and decide that it is bad. Many people, such as the lottery advocates above, often fail to complete the third step.

New Improved Lottery

If the opportunity to fantasize about winning justified the lottery, then a "new improved" lottery would be even better. You would buy a nearly-zero chance to become a millionaire at any moment over the next five years. You could spend every moment imagining that you might become a millionaire at that moment.

Your Rationality is My Business

As a human, I have a proper interest in the future of human civilization, including the human pursuit of truth. That makes your rationality my business. The danger is that we will think that we can respond to irrationality with violence. Relativism is not the way to avoid this danger. Instead, commit to using only arguments and evidence, never violence, against irrational thinking.

Consolidated Nature of Morality Thread

Disputes about the nature of morality tend to overwhelm other discussions, so this post was intended to be a home for those tangential thoughts.

Examples of questions to be discussed here include: What is the difference between "is" and "ought" statements? Why do some preferences seem voluntary? Do children believe God can choose what is moral? Is there a systematic direction to the development of moral beliefs in history, and, if so, what is the causal explanation of this? Does Tarski's definition of truth extend to moral statements? If you were physically altered to prefer killing, would "killing is good" become true? If the truth value of a moral claim cannot be changed by any physical act, does this make the claim stronger or weaker than other claims? What are the referents of moral claims, or are they empty of content? Are there "pure" ought-statements, or do they all have is-statements mixed into them? Are there pure aesthetic judgments or preferences?

Feeling Rational

Strong emotions can be rational. A rational belief that something good happened leads to rational happiness. But your emotions ought not to change your beliefs about events that do not depend causally on your emotions.

Universal Fire

You can't change just one thing in the world and expect the rest to continue working as before.

Universal Law

The same laws apply everywhere.

Think Like Reality

"Quantum physics is not "weird". You are weird. You have the absolutely bizarre idea that reality ought to consist of little billiard balls bopping around, when in fact reality is a perfectly normal cloud of complex amplitude in configuration space. This is your problem, not reality's, and you are the one who needs to change."

Beware the Unsurprised
The Third Alternative

on not skipping the step of looking for additional alternatives

Third Alternatives for Afterlife-ism
Scope Insensitivity

The human brain can't represent large quantities: an environmental measure that will save 200,000 birds doesn't conjure anywhere near a hundred times the emotional impact and willingness-to-pay of a measure that would save 2,000 birds.

One Life Against the World
Risk-Free Bonds Aren't
Correspondence Bias

Also known as the fundamental attribution error: the tendency to attribute the behavior of others to intrinsic dispositions, while excusing one's own behavior as the result of circumstance.

(alternate summary:)

Correspondence Bias is a tendency to attribute to a person a disposition to behave in a particular way, based on observing an episode in which that person behaves in that way. The data set that gets considered consists only of the observed episode, while the target model is of the person's behavior in general, in many possible episodes, in many different possible contexts that may influence the person's behavior.

Are Your Enemies Innately Evil?
Open Thread
Two More Things to Unlearn from School
Making Beliefs Pay Rent (in Anticipated Experiences)

Not every belief that we have is directly about sensory experience, but beliefs should pay rent in anticipations of experience. For example, if I believe that "Gravity is 9.8 m/s^2" then I should be able to predict where I'll see the second hand on my watch at the time I hear the crash of a bowling ball dropped off a building. On the other hand, if your postmodern English professor says that the famous writer Wulky is a "post-utopian", this may not actually mean anything. The moral is to ask "What experiences do I anticipate?" not "What statements do I believe?"
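
A sketch of what "paying rent" looks like in that example, computed under an assumed building height (the summary names no height):

    g = 9.8    # m/s^2, the believed acceleration of gravity
    h = 45.0   # m, a hypothetical building height
    t = (2 * h / g) ** 0.5  # constant-acceleration fall time
    print(round(t, 2), "s until I anticipate hearing the crash")  # ~3.03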

Belief in Belief

Suppose someone claims to have a dragon in their garage, but as soon as you go to look, they say, "It's an invisible dragon!" The remarkable thing is that they know in advance exactly which experimental results they shall have to excuse, indicating that some part of their mind knows what's really going on. And yet they may honestly believe they believe there's a dragon in the garage. They may perhaps believe it is virtuous to believe there is a dragon in the garage, and believe themselves virtuous. Even though they anticipate as if there is no dragon.

Bayesian Judo

You can have some fun with people whose anticipations get out of sync with what they believe they believe...

Professing and Cheering

On a panel on the compatibility of science and religion, a scientifically educated pagan panelist holds forth interminably on how she "believes" that Earth began with a giant primordial cow being born from the primordial abyss.

Belief as Attire

When you've stopped anticipating-as-if something, but still believe it is virtuous to believe it, this does not create the true fire of the child who really does believe. On the other hand, it is very easy for people to be passionate about group identification - sports teams, political sports teams - and this may account for the passion of beliefs worn as team-identification attire.

Religion's Claim to be Non-Disprovable
The Importance of Saying "Oops"
Focus Your Uncertainty

A TV pundit finds they have only 100 minutes to spend on preparing to explain why one of three different possible events was fully predicted by their pet theory.

The Proper Use of Doubt
The Virtue of Narrowness

One way to fight cached patterns of thought is to focus on precise concepts.

(alternate summary:)

It was perfectly all right for Isaac Newton to explain just gravity, just the way things fall down - and how planets orbit the Sun, and how the Moon generates the tides - but not the role of money in human society or how the heart pumps blood. Sneering at narrowness is rather reminiscent of ancient Greeks who thought that going out and actually looking at things was manual labor, and manual labor was for slaves.

You Can Face Reality
The Apocalypse Bet
Your Strength as a Rationalist

A hypothesis that forbids nothing permits everything, and thereby fails to constrain anticipation. Your strength as a rationalist is your ability to be more confused by fiction than by reality. If you are equally good at explaining any outcome, you have zero knowledge.

I Defy the Data!
Absence of Evidence Is Evidence of Absence
Conservation of Expected Evidence
Update Yourself Incrementally
One Argument Against An Army
Hindsight bias

Describes the tendency for events to seem much more likely in hindsight than they could have been predicted beforehand.

(alternate summary:)

Hindsight bias is a tendency to overestimate the a priori probability of an event that has actually happened. The data set that gets considered overemphasizes the scenario that did happen, while the model that needs to be constructed, of the a priori belief, should be indifferent to which of the options will actually get realized. From this model, you need to read out the probability of the specific event, but which event you'll read out shouldn't figure into the model itself.

Hindsight Devalues Science
Scientific Evidence, Legal Evidence, Rational Evidence
Is Molecular Nanotechnology "Scientific"?
Fake Explanations
Guessing the Teacher's Password

In schools, "education" often consists of having students memorize answers to specific questions (i.e., the "teacher's password"), rather than learning a predictive model that says what is and isn't likely to happen. Thus, students incorrectly learn to guess at passwords in the face of strange observations rather than admit their confusion. Don't do that: any explanation you give should have a predictive model behind it. If your explanation lacks such a model, start from a recognition of your own confusion and surprise at seeing the result.

Science as Attire
Fake Causality
Semantic Stopsigns
Mysterious Answers to Mysterious Questions
The Futility of Emergence
Positive Bias: Look Into the Dark

The tendency to look for evidence that confirms a hypothesis, rather than disconfirming evidence.

Say Not "Complexity"
My Wild and Reckless Youth
Failing to Learn from History
Making History Available
Stranger Than History
Explain/Worship/Ignore?
"Science" as Curiosity-Stopper

Although science does have explanations for phenomena, it is not enough to simply say that "Science!" is responsible for how something works -- nor is it enough to appeal to something more specific like "electricity" or "conduction". Yet for many people, simply noting that "Science has an answer" is enough to make them no longer curious about how it works. In that respect, "Science" is no different from more blatant curiosity-stoppers like "God did it!" But you shouldn't let your interest die simply because someone else knows the answer (which is a rather strange heuristic anyway): You should only be satisfied with a predictive model, and how a given phenomenon fits into that model.

Absurdity Heuristic, Absurdity Bias
Availability

Availability bias is a tendency to estimate the probability of an event based on whatever evidence about that event pops into your mind, without taking into account the ways in which some pieces of evidence are more memorable than others, or easier to come by than others. This bias directly consists in considering a mismatched data set that leads to a distorted model and a biased estimate.

Why is the Future So Absurd?
Anchoring and Adjustment
The Crackpot Offer
Radical Honesty
We Don't Really Want Your Participation
Applause Lights
Rationality and the English Language
Human Evil and Muddled Thinking
Doublethink (Choosing to be Biased)
Why I'm Blooking
Planning Fallacy

We tend to plan envisioning that everything will go as expected. Even assuming that such an estimate is accurate conditional on everything going as expected, things will not go as expected. As a result, we routinely see outcomes worse than the ex ante worst case scenario.

(alternate summary:)

Planning Fallacy is a tendency to overestimate your efficiency in achieving a task. The data set you consider consists of simple cached ways in which you move about accomplishing the task, and lacks the unanticipated problems and more complex ways in which the process may unfold. As a result, the model fails to adequately describe the phenomenon, and the answer comes out systematically wrong.

Kahneman's Planning Anecdote
Conjunction Fallacy

Elementary probability theory tells us that the probability of one thing (we write P(A)) is necessarily greater than or equal to the probability of the conjunction of that thing and another thing (we write P(A&B)). However, in the psychology lab, subjects' judgments do not conform to this rule. This is not an isolated artifact of a particular study design. Debiasing won't be as simple as practicing specific questions; it requires certain general habits of thought.
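
The identity behind the rule (standard probability theory, not specific to the post's experiments):

    P(A \wedge B) \;=\; P(A)\,P(B \mid A) \;\le\; P(A),
    \qquad \text{since } 0 \le P(B \mid A) \le 1 .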

Conjunction Controversy (Or, How They Nail It Down)
Burdensome Details
What is Evidence?
The Lens That Sees Its Flaws
How Much Evidence Does It Take?
Einstein's Arrogance
Occam's Razor
9/26 is Petrov Day
How to Convince Me That 2 + 2 = 3
The Bottom Line
What Evidence Filtered Evidence?
Rationalization
Recommended Rationalist Reading
A Rational Argument
We Change Our Minds Less Often Than We Think

We all change our minds occasionally, but we don't constantly, honestly reevaluate every decision and course of action. Once you think you believe something, the chances are good that you already do, for better or worse.

Avoiding Your Belief's Real Weak Points
The Meditation on Curiosity
Singlethink
No One Can Exempt You From Rationality's Laws
A Priori
Priming and Contamination

Even slight exposure to a stimulus is enough to change the outcome of a decision or estimate. See also Never Leave Your Room by Yvain, and Cached Selves by Salamon and Rayhawk.

(alternate summary:)

Contamination by Priming is a problem that relates to the process of implicitly introducing facts into the attended data set. When you are primed with a concept, the facts related to that concept come to mind more easily. As a result, the data set selected by your mind becomes tilted towards the elements related to that concept, even if it has no relation to the question you are trying to answer. Your thinking becomes contaminated, shifted in a particular direction. The data set in your focus of attention becomes less representative of the phenomenon you are trying to model, and more representative of the concepts you were primed with.

Do We Believe Everything We're Told?

Some experiments on priming suggest that mere exposure to a view is enough to get one to passively accept it, at least until it is specifically rejected.

Cached Thoughts
The "Outside the Box" Box
Original Seeing

One way to fight cached patterns of thought is to focus on precise concepts.

How to Seem (and Be) Deep

Just find ways of violating cached expectations.

The Logical Fallacy of Generalization from Fictional Evidence

The Logical Fallacy of Generalization from Fictional Evidence consists in drawing real-world conclusions from statements invented and selected for the purpose of writing fiction. The data set is not at all representative of the real world, and in particular of whatever real-world phenomenon you need to understand to answer your real-world question. Considering this data set leads to an inadequate model, and inadequate answers.

Hold Off On Proposing Solutions

Proposing solutions prematurely is dangerous, because it introduces weak conclusions into the pool of facts you are considering. As a result, the data set you think about becomes weaker, tilted towards premature conclusions that are likely to be wrong and less representative of the phenomenon you are trying to model than the initial facts you started from.

"Can't Say No" Spending
Congratulations to Paris Hilton
Pascal's Mugging: Tiny Probabilities of Vast Utilities
Illusion of Transparency: Why No One Understands You

Everyone knows what their own words mean, but experiments have confirmed that we systematically overestimate how much sense we are making to others.

Self-Anchoring

Related to contamination and the illusion of transparency, we "anchor" on our own experience and underadjust when trying to understand others.

Expecting Short Inferential Distances
Explainers Shoot High. Aim Low!
Double Illusion of Transparency
No One Knows What Science Doesn't Know
Why Are Individual IQ Differences OK?
Bay Area Bayesians Unite!
Motivated Stopping and Motivated Continuation
Torture vs. Dust Specks
A Case Study of Motivated Continuation
A Terrifying Halloween Costume
Fake Justification
An Alien God

Evolution is awesomely powerful, unbelievably stupid, incredibly slow, monomaniacally singleminded, irrevocably splintered in focus, blindly shortsighted, and itself a completely accidental process. If evolution were a god, it would not be Jehovah, but H. P. Lovecraft's Azathoth, the blind idiot God burbling chaotically at the center of everything.

The Wonder of Evolution

...is not how amazingly well it works, but that it works at all without a mind, brain, or the ability to think abstractly - that an entirely accidental process can produce complex designs. If you talk about how amazingly well evolution works, you're missing the point.

(alternate summary:)

The wonder of the first replicator was not how amazingly well it replicated, but that a first replicator could arise, at all, by pure accident, in the primordial seas of Earth. That first replicator would undoubtedly be devoured in an instant by a sophisticated modern bacterium. Likewise, the wonder of evolution itself is not how well it works, but that a brainless, accidentally occurring optimization process can work at all. If you praise evolution for being such a wonderfully intelligent Creator, you're entirely missing the wonderful thing about it.

Evolutions Are Stupid (But Work Anyway)

Evolution, while not simple, is sufficiently simpler than organic brains that we can describe mathematically how slow and stupid it is.

(alternate summary:)

Modern evolutionary theory gives us a definite picture of evolution's capabilities. If you praise evolution one millimeter higher than this, you are not scoring points against creationists, you are just being factually inaccurate. In particular we can calculate the probability and time for advantageous genes to rise to fixation. For example, a mutation conferring a 3% advantage would have only a 6% probability of surviving, and if it did so, would take 875 generations to rise to fixation in a population of 500,000 (on average).
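
The summary's numbers match the standard population-genetics approximations (survival probability of a new beneficial mutation roughly 2s, mean time to fixation roughly 2 ln(N)/s generations); a quick check:

    import math

    s, N = 0.03, 500000  # 3% fitness advantage, population of 500,000
    print(2 * s)                # 0.06: about a 6% chance of surviving drift
    print(2 * math.log(N) / s)  # ~875 generations to rise to fixation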

Natural Selection's Speed Limit and Complexity Bound

Tried to argue mathematically that there could be at most 25MB of meaningful information (or thereabouts) in the human genome, but computer simulations failed to bear out the mathematical argument. It does seem probable that evolution has some kind of speed limit and complexity bound - eminent evolutionary biologists seem to believe it, and in fact the Genome Project discovered only 25,000 genes in the human genome - but this particular math may not be the correct argument.

Beware of Stephen J. Gould

A lot of people have gotten their grasp of evolutionary theory from Stephen J. Gould, a man who committed the moral equivalent of fraud in a way that is difficult to explain. At any rate, he severely misrepresented what evolutionary biologists believe, in the course of pretending to attack certain beliefs. One needs to clear from memory, as much as possible, not just everything that Gould positively stated but everything he seemed to imply the mainstream theory believed.

The Tragedy of Group Selectionism

A tale of how some pre-1960s biologists were led astray by expecting evolution to do smart, nice things like they would do themselves.

(alternate summary:)

Describes a key case where some pre-1960s evolutionary biologists went wrong by anthropomorphizing evolution - in particular, Wynne-Edwards, Allee, and Brereton, among others, believed that predators would voluntarily restrain their breeding to avoid overpopulating their habitat. Since evolution does not usually do this sort of thing, their rationale was group selection - populations that did this would survive better. But group selection is extremely difficult to make work mathematically, and an experiment under conditions sufficiently extreme to permit group selection had rather different results.

Fake Selfishness
Fake Morality
Fake Optimization Criteria

Why study evolution? For one thing - it lets us see an alien optimization process up close - lets us see the real consequence of optimizing strictly for an alien optimization criterion like inclusive genetic fitness. Humans, who try to persuade other humans to do things their way, think that this policy criterion ought to require predators to restrain their breeding to live in harmony with prey; the true result is something that humans find less aesthetic.

Adaptation-Executers, not Fitness-Maximizers

A central principle of evolutionary biology in general, and evolutionary psychology in particular. If we regarded human taste buds as trying to maximize fitness, we might expect that, say, humans fed a diet too high in calories and too low in micronutrients, would begin to find lettuce delicious, and cheeseburgers distasteful. But it is better to regard taste buds as an executing adaptation - they are adapted to an ancestral environment in which calories, not micronutrients, were the limiting factor.

Evolutionary Psychology
Protein Reinforcement and DNA Consequentialism
Thou Art Godshatter

Describes the evolutionary psychology behind the complexity of human values - how they got to be complex, and why, given that origin, there is no reason in hindsight to expect them to be simple. We certainly are not built to maximize genetic fitness.

Terminal Values and Instrumental Values
Evolving to Extinction

Contrary to a naive view that evolution works for the good of a species, evolution says that genes which outreproduce their alternative alleles increase in frequency within a gene pool. It is entirely possible for genes which "harm" the species to outcompete their alternatives in this way - indeed, it is entirely possible for a species to evolve to extinction.

(alternate summary:)

On how evolution could be responsible for the bystander effect.

(alternate summary:)

It is a common misconception that evolution works for the good of a species, but actually evolution only cares about the inclusive fitness of genes relative to each other, and so it is quite possible for a species to evolve to extinction.

No Evolutions for Corporations or Nanodevices

Price's Equation describes quantitatively how the change in an average trait, in each generation, is equal to the covariance between that trait and fitness. Such covariance requires substantial variation in traits, substantial variation in fitness, and substantial correlation between the two - and then, to get large cumulative selection pressures, the correlation must have persisted over many generations with high-fidelity inheritance, continuing sources of new variation, and frequent birth of a significant fraction of the population. People think of "evolution" as something that automatically gets invoked where "reproduction" exists, but these other conditions may not be fulfilled - which is why corporations haven't evolved, and nanodevices probably won't.
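
In its simple form (ignoring transmission bias), with z_i the trait value and w_i the fitness of individual i, the equation the summary describes reads:

    \Delta \bar{z} \;=\; \frac{\operatorname{Cov}(w_i, z_i)}{\bar{w}}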

The Simple Math of Everything
Conjuring An Evolution To Serve You

If you take the hens who lay the most eggs in each generation, and breed from them, you should get hens who lay more and more eggs. Sounds logical, right? But this selection may actually favor the most dominant hen, that pecked its way to the top of the pecking order at the expense of other hens. Such breeding programs produce hens that must be housed in individual cages, or they will peck each other to death. Jeff Skilling of Enron fancied himself an evolution-conjurer - summoning the awesome power of evolution to work for him - and so, every year, every Enron employee's performance would be evaluated, and the bottom 10% would get fired, and the top performers would get huge raises and bonuses...

Artificial Addition
Truly Part Of You
Not for the Sake of Happiness (Alone)

Tackles the Hollywood Rationality trope that "rational" preferences must reduce to selfish hedonism - caring strictly about personally experienced pleasure. An ideal Bayesian agent - implementing strict Bayesian decision theory - can have a utility function that ranges over anything, not just internal subjective experiences.

Leaky Generalizations
The Hidden Complexity of Wishes
Lost Purposes

on noticing when you're still doing something that has become disconnected from its original purpose

Purpose and Pragmatism
The Affect Heuristic

Positive and negative emotional impressions exert a greater effect on many decisions than does rational analysis.

Evaluability (And Cheap Holiday Shopping)

It's difficult for humans to evaluate an option except in comparison to other options. Poor decisions result when a poor category for comparison is used. Includes an application for cheap gift-shopping.

(alternate summary:)

Is there a way to exploit human biases to give the impression of largess with cheap gifts? Yes. Humans compare the value/price of an object to other similar objects. A $399 Eee PC is cheap (because other laptops are more expensive), yet a $399 PS3 is expensive (because the alternatives are less expensive). To give the impression of expense in a gift, choose a cheap class of item (say, a candle) and buy the most expensive one around.

Unbounded Scales, Huge Jury Awards, & Futurism

Without a metric for comparison, estimates of, e.g., what sort of punitive damages should be awarded, or when some future advance will happen, vary widely simply due to the lack of a scale.

The Halo Effect

Positive qualities seem to correlate with each other, whether or not they actually do.

Superhero Bias
Mere Messiahs
Affective Death Spirals
Resist the Happy Death Spiral
Uncritical Supercriticality
Fake Fake Utility Functions
Fake Utility Functions

Describes the seeming fascination that many have with trying to compress morality down to a single principle. The sequence leading up to this post tries to explain the cognitive twists whereby people smuggle all of their complicated other preferences into their choice of exactly which acts they try to justify using their single principle; but if they were really following only that single principle, they would choose other acts to justify.

Evaporative Cooling of Group Beliefs
When None Dare Urge Restraint
The Robbers Cave Experiment
Misc Meta
Every Cause Wants To Be A Cult
Reversed Stupidity Is Not Intelligence

The world's greatest fool may say the Sun is shining, but that doesn't make it dark out. Stalin also believed that 2 + 2 = 4. Stupidity or human evil do not anticorrelate with truth. Arguing against weaker advocates proves nothing, because even the strongest idea will attract weak advocates.

Argument Screens Off Authority
Hug the Query

The more directly your arguments bear on a question, without intermediate inferences, the more powerful the evidence. We should try to observe evidence that is as near to the original question as possible, so that it screens off as many other arguments as possible.

Guardians of the Truth

Endorsing a concept of truth is not the same as endorsing a particular belief as eternally, absolutely, knowably true.

Guardians of the Gene Pool
Guardians of Ayn Rand
The Litany Against Gurus
Politics and Awful Art
Two Cult Koans
False Laughter
Effortless Technique
Zen and the Art of Rationality
The Amazing Virgin Pregnancy

A story in which Mary tells Joseph that God made her pregnant so Joseph won't realize she's been cheating on him with the village rabbi.

Asch's Conformity Experiment

The unanimous agreement of surrounding others can make subjects disbelieve (or at least, fail to report) what's right before their eyes. The addition of just one dissenter is enough to dramatically reduce the rates of improper conformity.

On Expressing Your Concerns

A way of breaking the conformity effect in some cases

Lonely Dissent
To Lead, You Must Stand Up
Cultish Countercultishness
My Strange Beliefs
End of 2007 articles