User:PeerInfinity/Scripts/SyncArticleLinks.php/ArticleSummaries.txt

From Lesswrongwiki
Revision as of 14:05, 21 October 2009 by PeerInfinity (talk | contribs)


The Martial Art of Rationality

Basic introduction of the metaphor and some of its consequences.

Why truth? And...

You have an instrumental motive to care about the truth of your beliefs about anything you care about.


The Third Alternative

On not skipping the step of looking for additional alternatives.

Correspondence Bias

Also known as the fundamental attribution error: the tendency to attribute the behavior of others to intrinsic dispositions, while excusing one's own behavior as the result of circumstance.

Hindsight bias

Describes the tendency for events to seem much more likely in hindsight than they could have been predicted beforehand.

Positive Bias: Look Into the Dark

The tendency to look for evidence that confirms a hypothesis, rather than for disconfirming evidence.

Conjunction Controversy (Or, How They Nail It Down)

The conjunction fallacy is not an artifact of a particular study design; it has been replicated under many variations. Debiasing won't be as simple as practicing specific questions; it requires certain general habits of thought.

Burdensome Details

Adding more details to a claim can make it sound more plausible, even though each extra detail makes the claim strictly less probable.
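The probability rule behind these two posts is the conjunction rule: for any claims A and B, P(A and B) ≤ P(A). A minimal sketch (the probability numbers are arbitrary illustrations, not taken from the posts):

```python
# Conjunction rule: adding a detail can never raise a claim's probability.
# The numbers below are arbitrary illustrations.

p_flood = 0.10              # P(a major flood occurs next year)
p_quake_given_flood = 0.30  # P(an earthquake caused it, given the flood)

# The more detailed story "an earthquake causes a flood":
p_quake_and_flood = p_flood * p_quake_given_flood

# The detailed conjunction is strictly less probable than the bare claim,
# even though the extra detail may make the story sound more plausible.
assert p_quake_and_flood <= p_flood
```

This is why piling vivid details onto a scenario makes it feel more believable while actually making it less likely.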

We Change Our Minds Less Often Than We Think

— We all change our minds occasionally, but we don't constantly, honestly reevaluate every decision and course of action. Once you think you believe something, the chances are good that you already do, for better or worse.

Do We Believe Everything We're Told?

— Some experiments on priming suggest that mere exposure to a view is enough to get one to passively accept it, at least until it is specifically rejected.

Illusion of Transparency: Why No One Understands You

— Everyone knows what their own words mean, but experiments have confirmed that we systematically overestimate how much sense we are making to others.

Fake Justification

People smuggle all of their complicated other preferences into their choice of exactly which acts they try to justify using their single principle; but if they were really following only that single principle, they would choose other acts to justify.

Evolutions Are Stupid (But Work Anyway)

Evolution, while not simple, is sufficiently simpler than organic brains that we can describe mathematically how slow and stupid it is.

Natural Selection's Speed Limit and Complexity Bound

This post tried to argue mathematically that there could be at most 25MB of meaningful information (or thereabouts) in the human genome, but computer simulations failed to bear out the mathematical argument. It does seem probable that evolution has some kind of speed limit and complexity bound - eminent evolutionary biologists seem to believe it, and in fact the Genome Project discovered only about 25,000 genes in the human genome - but this particular math may not be the correct argument.

The Tragedy of Group Selectionism

A tale of how some pre-1960s biologists were led astray by expecting evolution to do smart, nice things like they would do themselves.

Thou Art Godshatter

describes the evolutionary psychology behind the complexity of human values - how they got to be complex, and why, given that origin, there is no reason in hindsight to expect them to be simple. We certainly are not built to maximize genetic fitness.

Evolving to Extinction

Contrary to a naive view that evolution works for the good of a species, evolution says that genes which outreproduce their alternative alleles increase in frequency within a gene pool. It is entirely possible for genes which "harm" the species to outcompete their alternatives in this way - indeed, it is entirely possible for a species to evolve to extinction.

(alternate summary:)

It is a common misconception that evolution works for the good of a species, but actually evolution only cares about the inclusive fitness of genes relative to each other, and so it is quite possible for a species to evolve to extinction.

Not for the Sake of Happiness (Alone)

tackles the Hollywood Rationality trope that "rational" preferences must reduce to selfish hedonism - caring strictly about personally experienced pleasure. An ideal Bayesian agent - implementing strict Bayesian decision theory - can have a utility function that ranges over anything, not just internal subjective experiences.

The Hidden Complexity of Wishes

There are a lot of things humans care about; consequently, any wish we might make of a powerful genie carries enormous hidden complexity, because it implicitly depends on all of those background preferences.

Lost Purposes

On noticing when you're still doing something that has become disconnected from its original purpose.

The Affect Heuristic

— Positive and negative emotional impressions exert a greater effect on many decisions than does rational analysis.

Evaluability (And Cheap Holiday Shopping)

— It's difficult for humans to evaluate an option except in comparison to other options. Poor decisions result when a poor category for comparison is used. Includes an application for cheap gift-shopping.

Unbounded Scales, Huge Jury Awards, & Futurism

— Without a metric for comparison, estimates of, e.g., what sorts of punitive damages should be awarded, or when some future advance will happen, vary widely simply due to the lack of a scale.

Fake Fake Utility Functions

The sequence leading up to this post tries to explain the cognitive twists whereby people smuggle all of their complicated other preferences into their choice of exactly which acts they try to justify using their single principle; but if they were really following only that single principle, they would choose other acts to justify.

Fake Utility Functions

describes the seeming fascination that many have with trying to compress morality down to a single principle. The sequence leading up to this post tries to explain the cognitive twists whereby people smuggle all of their complicated other preferences into their choice of exactly which acts they try to justify using their single principle; but if they were really following only that single principle, they would choose other acts to justify.

Guardians of the Truth

and Guardians of Ayn Rand

Is Reality Ugly?

and Beautiful Probability

Trust in Math

and Trust in Bayes

Zut Allais!

(and followups) — Offered choices between gambles, people make decision-theoretically inconsistent decisions.

Allais Malaise

— Offered choices between gambles, people make decision-theoretically inconsistent decisions.
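The inconsistency can be checked mechanically. In the standard Allais gambles, option 1A is $1M with certainty and 1B is 89% $1M / 10% $5M / 1% nothing, while 2A is 11% $1M and 2B is 10% $5M; most people pick 1A and 2B. A sketch showing that no assignment of utilities to the three outcomes supports both choices under expected utility (the integer utility ranges are arbitrary):

```python
import random

# Expected utilities scaled by 100 so integer utilities stay exact:
#   1A: $1M for sure             -> 100*u1
#   1B: 89% $1M, 10% $5M, 1% $0  -> 89*u1 + 10*u5 + 1*u0
#   2A: 11% $1M, 89% $0          -> 11*u1 + 89*u0
#   2B: 10% $5M, 90% $0          -> 10*u5 + 90*u0

def prefers_1a(u0, u1, u5):
    return 100 * u1 > 89 * u1 + 10 * u5 + u0      # i.e. 11*u1 > 10*u5 + u0

def prefers_2b(u0, u1, u5):
    return 10 * u5 + 90 * u0 > 11 * u1 + 89 * u0  # i.e. 10*u5 + u0 > 11*u1

# The two inequalities are exact opposites, so no utility assignment
# can make both of the common choices come out preferred.
random.seed(0)
for _ in range(100_000):
    u = [random.randint(-1000, 1000) for _ in range(3)]
    assert not (prefers_1a(*u) and prefers_2b(*u))
```

Whatever values u0, u1, u5 take, preferring 1A commits an expected-utility maximizer to preferring 2A as well; choosing 1A and 2B together is what makes the pattern inconsistent.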

Words as Hidden Inferences

The mere presence of words can influence thinking, sometimes misleading it.

How An Algorithm Feels From Inside

(see also the wiki page)

(alternate summary:)

and Dissolving the Question - setting up the problem.

Disputing Definitions

An example of how the technique helps.

Taboo Your Words

and Replace the Symbol with the Substance - Description of the technique.

Replace the Symbol with the Substance

Description of the technique.

Perpetual Motion Beliefs

and Searching for Bayes-Structure

Dissolving the Question

Since the "free will" puzzle is fully and completely dissolved on Less Wrong, aspiring reductionists should try to solve it on their own.

(alternate summary:)

- this is where the "free will" puzzle is explicitly posed, along with criteria for what does and does not constitute a satisfying answer.

(alternate summary:)

setting up the problem.

Wrong Questions

The "free will" puzzle is fully and completely dissolved on Less Wrong; aspiring reductionists should try to solve it on their own.

Explaining vs. Explaining Away

and Hand vs. Fingers - elementary reductionism.

Fake Reductionism

It takes a detailed step-by-step walkthrough.

Initiation Ceremony

Brennan is inducted into the Conspiracy.

Hand vs. Fingers

and Explaining vs. Explaining Away - elementary reductionism.

Zombies! Zombies?

by Eliezer Yudkowsky

Zombie Responses

by Eliezer Yudkowsky

The Generalized Anti-Zombie Principle

by Eliezer Yudkowsky


GAZP vs. GLUT

by Eliezer Yudkowsky


Zombies: The Movie

by Eliezer Yudkowsky

The Failures of Eld Science

Jeffreyssai explains that rationalists should be fast.

(alternate summary:)

(prerequisite: Quantum Physics)

(alternate summary:)

Fictional portrayal of a potential rationality dojo.

The Dilemma: Science or Bayes?

and Science Doesn't Trust Your Rationality


Do Scientists Already Know This Stuff?

and No Safe Defense, Not Even Science

Timeless Causality

and Timeless Control (from The Quantum Physics Sequence)

Class Project

The students are given one month to develop a theory of quantum gravity.

Timeless Control

(from The Quantum Physics Sequence)

Heading Toward Morality

The sequence on metaethics that begins here runs to August 22, 2008, albeit with a good deal of related material before and after.

Probability is Subjectively Objective

, and Qualitatively Confused

Anthropomorphic Optimism

You shouldn't bother coming up with clever, persuasive arguments for why evolution will do things the way you prefer. It really isn't listening.

Inseparably Right; or, Joy in the Merely Good

Morality's arguments ground in "On reflection, don't you think this is what you would actually want (for yourself and others)?"

Invisible Frameworks

The Ritual

Jeffreyssai carefully undergoes a crisis of faith.

(alternate summary:)

(short story)

Logical or Connectionist AI?

(The correct answer being "Wrong!")

Failure By Analogy

and Failure By Affective Analogy

The Fun Theory Sequence

describes some of the many complex considerations that determine what sort of happiness we most prefer to have - given that many of us would decline to just have an electrode planted in our pleasure centers.

Cynicism in Ev-Psych (and Econ?)

and The Evolutionary-Cognitive Boundary

About Less Wrong

When Eliezer Yudkowsky announced Less Wrong, he mentioned two topics that deserved a moratorium until the end of April 2009. These are The Singularity and Artificial General Intelligence. In discussions, these are often referred to as "The Topics that Must Not be Named". Occasionally you'll also see "The Institute that Must Not be Named". This is presumably SIAI (the Singularity Institute for Artificial Intelligence).

Epistemic Viciousness

On epistemic viciousness in the martial arts, and do attempts at rationality training run into the same problem?

Never Leave Your Room

by Yvain, and Cached Selves by Salamon and Rayhawk.

Cached Selves

by Salamon and Rayhawk.

Newcomb's Problem standard positions

by Eliezer Yudkowsky

Extreme Rationality: It's Not That Great

Asks whether studying rationality brings big practical benefits, and essentially answers "Not on the present state of the Art".

The Unfinished Mystery of the Shangri-La Diet

and Akrasia and Shangri-La

Towards a New Decision Theory

by Wei Dai.

Privileging the Hypothesis

(and its requisites, like Locating the hypothesis)