User:PeerInfinity/Scripts/SyncArticleLinks.php/ArticleSummaries.txt

=====[http://lesswrong.com/lw/gn/the_martial_art_of_rationality/ The Martial Art of Rationality]=====
Basic introduction of the metaphor and some of its consequences.

=====[http://lesswrong.com/lw/go/why_truth_and/ Why truth? And...]=====
You have an instrumental motive to care about the truth of your beliefs about anything you care about.

=====[http://lesswrong.com/lw/hh/the_third_alternative/ The Third Alternative]=====
On not skipping the step of looking for additional alternatives.

=====[http://lesswrong.com/lw/hz/correspondence_bias/ Correspondence Bias]=====
Also known as the fundamental attribution error: the tendency to attribute the behavior of others to intrinsic dispositions, while excusing one's own behavior as the result of circumstance.

 
=====[http://lesswrong.com/lw/il/hindsight_bias/ Hindsight bias]=====
Describes the tendency for outcomes to seem much more likely in hindsight than they could have been predicted to be beforehand.

=====[http://lesswrong.com/lw/im/positive_bias_look_into_the_dark/ Positive Bias: Look Into the Dark]=====
The tendency to look for evidence that confirms a hypothesis, rather than disconfirming evidence.

=====[http://lesswrong.com/lw/jj/conjunction_controversy_or_how_they_nail_it_down/ Conjunction Controversy (Or, How They Nail It Down)]=====
 
 
Many disparate experiments nail down the conjunction fallacy as a real bias, not an artifact of a particular study design.  Debiasing [http://lesswrong.com/lw/jk/burdensome_details/ won't be as simple] as practicing specific questions; it requires certain general habits of thought.
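
The arithmetic being nailed down is simple: a conjunction can never be more probable than either of its conjuncts.  A minimal sketch (my own illustrative numbers, not figures from the post), using Tversky and Kahneman's famous "Linda" framing:

<pre>
# P(A and B) <= P(A) for any events A, B: adding a detail can only
# remove probability mass.  The numbers below are made up.
p_teller = 0.05                   # P(Linda is a bank teller)
p_feminist_given_teller = 0.60    # P(active feminist | bank teller)

p_teller_and_feminist = p_teller * p_feminist_given_teller
print(p_teller)                   # 0.05
print(p_teller_and_feminist)      # 0.03 -- necessarily <= 0.05
# Yet subjects in the experiments rated "bank teller and active
# feminist" as MORE probable than "bank teller" alone.
</pre>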
 
 
=====[http://lesswrong.com/lw/jk/burdensome_details/ Burdensome Details]=====
 
 
Debiasing won't be as simple as practicing specific questions; it requires certain general habits of thought.
 
  
 
=====[http://lesswrong.com/lw/jx/we_change_our_minds_less_often_than_we_think/ We Change Our Minds Less Often Than We Think]=====
We all change our minds occasionally, but we don't constantly, honestly reevaluate every decision and course of action. Once you think you believe something, the chances are good that you already do, for better or worse.
 
=====[http://lesswrong.com/lw/k4/do_we_believe_everything_were_told/ Do We Believe Everything We're Told?]=====
Some experiments on priming suggest that mere exposure to a view is enough to get one to passively accept it, at least until it is specifically rejected.
 
=====[http://lesswrong.com/lw/ke/illusion_of_transparency_why_no_one_understands/ Illusion of Transparency: Why No One Understands You]=====
Everyone knows what their own words mean, but experiments have confirmed that we systematically overestimate how much sense we are making to others.
 
=====[http://lesswrong.com/lw/kq/fake_justification/ Fake Justification]=====
 
 
 
People smuggle their complicated ''other'' preferences into their choice of ''exactly'' which acts they try to justify using their single principle; but if they were ''really'' following ''only'' that single principle, they would [http://lesswrong.com/lw/kz/fake_optimization_criteria/ choose other acts to justify].
 
  
 
=====[http://lesswrong.com/lw/kt/evolutions_are_stupid_but_work_anyway/ Evolutions Are Stupid (But Work Anyway)]=====
Evolution, while not simple, is sufficiently simpler than organic brains that we can describe mathematically how slow and stupid it is.
 
=====[http://lesswrong.com/lw/ku/natural_selections_speed_limit_and_complexity/ Natural Selection's Speed Limit and Complexity Bound]=====
 
 
Tried to argue mathematically that there could be at most 25MB of meaningful information (or thereabouts) in the human genome, but computer simulations failed to bear out the mathematical argument.  It does seem probable that evolution has some kind of speed limit and complexity bound - eminent evolutionary biologists seem to believe it, and in fact the [[wikipedia:Human Genome Project|Genome Project]] discovered only 25,000 genes in the human genome - but this particular math may not be the correct ''argument''.
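
For readers who want to poke at the "computer simulations" side themselves, here is a minimal toy model (my own sketch with arbitrary parameters, not one of the simulations referenced above): truncation selection on bit-string genomes, where "meaningful information" is crudely measured as bits matching an arbitrary target genome.

<pre>
import random

GENOME_BITS = 1000     # toy genome length in bits (arbitrary)
POP_SIZE = 100         # arbitrary
MUTATION_RATE = 0.001  # per-bit copying error rate (arbitrary)

# An arbitrary "optimal" genome; fitness = number of matching bits.
target = [random.randint(0, 1) for _ in range(GENOME_BITS)]

def fitness(genome):
    return sum(g == t for g, t in zip(genome, target))

def mutate(genome):
    return [1 - b if random.random() < MUTATION_RATE else b for b in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
              for _ in range(POP_SIZE)]

for gen in range(201):
    population.sort(key=fitness, reverse=True)
    if gen % 40 == 0:
        print("gen %3d: best match %d/%d bits"
              % (gen, fitness(population[0]), GENOME_BITS))
    survivors = population[:POP_SIZE // 2]           # truncation selection
    population = [mutate(random.choice(survivors))   # asexual reproduction
                  for _ in range(POP_SIZE)]
</pre>

Runs of a model like this show selection gaining ground on the target only gradually, with progress slowing as the easy gains are exhausted - the flavor of speed limit at issue, though not the post's exact math.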
 
  
 
=====[http://lesswrong.com/lw/kw/the_tragedy_of_group_selectionism/ The Tragedy of Group Selectionism]=====
A tale of how some pre-1960s biologists were led astray by expecting evolution to do smart, nice things like they would do themselves.

=====[http://lesswrong.com/lw/l3/thou_art_godshatter/ Thou Art Godshatter]=====
Describes the evolutionary psychology behind the complexity of human values - how they got to be complex, and why, given that origin, there is no reason in hindsight to expect them to be simple. We certainly are not built to maximize genetic fitness.
 
=====[http://lesswrong.com/lw/l5/evolving_to_extinction/ Evolving to Extinction]=====
Contrary to a naive view that evolution works for the good of a species, evolution says that genes which outreproduce their alternative alleles increase in frequency within a gene pool.  It is entirely possible for genes which "harm" the species to outcompete their alternatives in this way - indeed, it is entirely possible for a species to ''evolve to extinction''.

(alternate summary:)

It is a common misconception that evolution works for the good of a species, but actually evolution only cares about the inclusive fitness of genes relative to ''each other'', and so it is quite possible for a species to evolve to extinction.
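
As a concrete illustration of that last point, here is a toy simulation (my own sketch with made-up parameters, not a model from the post): a "driving" allele D that biases its own transmission during meiosis outcompetes its alternative allele even though it reduces its carriers' survival, so it sweeps the gene pool and then drags the population to extinction.

<pre>
import random

CAPACITY = 1000                       # carrying capacity (arbitrary)
DRIVE = 0.9                           # heterozygotes transmit D 90% of the time
SURVIVAL = {0: 1.0, 1: 0.9, 2: 0.4}   # survival by number of D copies

pop = [("+", "+")] * (CAPACITY - 50) + [("D", "+")] * 50

def gamete(parent):
    a, b = parent
    if a != b:                        # heterozygote: meiotic drive
        return "D" if random.random() < DRIVE else "+"
    return a                          # homozygote transmits its only allele

for gen in range(60):
    survivors = [ind for ind in pop
                 if random.random() < SURVIVAL[ind.count("D")]]
    if len(survivors) < 2:
        print("gen %2d: extinct" % gen)
        break
    n_offspring = min(CAPACITY, int(len(survivors) * 1.2))
    pop = [(gamete(random.choice(survivors)), gamete(random.choice(survivors)))
           for _ in range(n_offspring)]
    freq = sum(ind.count("D") for ind in pop) / (2.0 * len(pop))
    print("gen %2d: N=%4d  freq(D)=%.2f" % (gen, len(pop), freq))
</pre>

With transmission biased to 90%, D spreads despite the survival cost; once most individuals carry two copies, mean survival falls below replacement and the population collapses.  Set DRIVE = 0.5 (fair meiosis) and the costly allele is simply eliminated instead.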

=====[http://lesswrong.com/lw/lb/not_for_the_sake_of_happiness_alone/ Not for the Sake of Happiness (Alone)]=====
Tackles the [[Hollywood Rationality]] trope that "rational" preferences must reduce to selfish hedonism - caring strictly about personally experienced pleasure.  An ideal Bayesian agent - implementing strict Bayesian decision theory - can have a utility function that [http://lesswrong.com/lw/l4/terminal_values_and_instrumental_values/ ranges over anything, not just internal subjective experiences].
 
=====[http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/ The Hidden Complexity of Wishes]=====
 
 
People smuggle all of their complicated ''other'' preferences into their choice of ''exactly'' which acts they try to ''[http://lesswrong.com/lw/kq/fake_justification/ justify using]'' their single principle; but if they were ''really'' following ''only'' that single principle, they would [http://lesswrong.com/lw/kz/fake_optimization_criteria/ choose other acts to justify].
 
  
 
=====[http://lesswrong.com/lw/le/lost_purposes/ Lost Purposes]=====
On noticing when you're still doing something that has become disconnected from its original purpose.
 
=====[http://lesswrong.com/lw/lg/the_affect_heuristic/ The Affect Heuristic]=====
Positive and negative emotional impressions exert a greater effect on many decisions than does rational analysis.
 
=====[http://lesswrong.com/lw/lh/evaluability_and_cheap_holiday_shopping/ Evaluability (And Cheap Holiday Shopping)]=====
It's difficult for humans to evaluate an option except in comparison to other options. Poor decisions result when a poor category for comparison is used. Includes an application for cheap gift-shopping.
 
=====[http://lesswrong.com/lw/li/unbounded_scales_huge_jury_awards_futurism/ Unbounded Scales, Huge Jury Awards, & Futurism]=====
Without a metric for comparison, estimates of, e.g., what sorts of punitive damages should be awarded, or when some future advance will happen, vary widely simply due to the lack of a scale.
 
=====[http://lesswrong.com/lw/lp/fake_fake_utility_functions/ Fake Fake Utility Functions]=====
 
 
 
The sequence leading up to this post tries to explain the cognitive twists whereby people [http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/ smuggle] all of their complicated ''other'' preferences into their choice of ''exactly'' which acts they try to ''[http://lesswrong.com/lw/kq/fake_justification/ justify using]'' their single principle; but if they were ''really'' following ''only'' that single principle, they would [http://lesswrong.com/lw/kz/fake_optimization_criteria/ choose other acts to justify].
 
  
 
=====[http://lesswrong.com/lw/lq/fake_utility_functions/ Fake Utility Functions]=====
Describes the seeming fascination that many have with trying to compress morality down to a single principle.  The [http://lesswrong.com/lw/lp/fake_fake_utility_functions/ sequence leading up] to this post tries to explain the cognitive twists whereby people [http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/ smuggle] all of their complicated ''other'' preferences into their choice of ''exactly'' which acts they try to ''[http://lesswrong.com/lw/kq/fake_justification/ justify using]'' their single principle; but if they were ''really'' following ''only'' that single principle, they would [http://lesswrong.com/lw/kz/fake_optimization_criteria/ choose other acts to justify].
 
=====[http://lesswrong.com/lw/lz/guardians_of_the_truth/ Guardians of the Truth]=====
 
 
See also [http://lesswrong.com/lw/m1/guardians_of_ayn_rand/ Guardians of Ayn Rand].
 
 
=====[http://lesswrong.com/lw/ms/is_reality_ugly/ Is Reality Ugly?]=====
 
 
See also [http://lesswrong.com/lw/mt/beautiful_probability/ Beautiful Probability].
 
 
=====[http://lesswrong.com/lw/mu/trust_in_math/ Trust in Math]=====
 
 
See also [http://lesswrong.com/lw/na/trust_in_bayes/ Trust in Bayes].
 
 
=====[http://lesswrong.com/lw/mz/zut_allais/ Zut Allais!]=====
 
 
(With [http://lesswrong.com/lw/n1/allais_malaise/ followups].) Offered choices between gambles, people make decision-theoretically inconsistent decisions.
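
The inconsistency can be exhibited in a few lines.  A worked sketch using the classic Allais payoffs (illustrative; the post runs the same argument with different dollar amounts):

<pre>
def expected_utility(lottery, u):
    return sum(p * u(x) for p, x in lottery)

# Pair 1: a certain $1M, versus a richer gamble with a 1% chance of nothing.
A1 = [(1.00, 10**6)]
B1 = [(0.10, 5 * 10**6), (0.89, 10**6), (0.01, 0)]
# Pair 2: the same two gambles with a common 89% chance of $1M removed.
A2 = [(0.11, 10**6), (0.89, 0)]
B2 = [(0.10, 5 * 10**6), (0.90, 0)]

def u(x):                    # any fixed utility function will do
    return x ** 0.5

for name, lot in [("1A", A1), ("1B", B1), ("2A", A2), ("2B", B2)]:
    print(name, expected_utility(lot, u))

# EU(1A) - EU(1B) == EU(2A) - EU(2B) for *every* u, because both
# differences reduce to 0.11*u($1M) - 0.10*u($5M) - 0.01*u($0).
# So the common pattern "1A over 1B, but 2B over 2A" cannot be
# expected-utility maximization under any utility function.
</pre>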
 
 
=====[http://lesswrong.com/lw/n1/allais_malaise/ Allais Malaise]=====
 
 
Offered choices between gambles, people make decision-theoretically inconsistent decisions.
 
 
=====[http://lesswrong.com/lw/ng/words_as_hidden_inferences/ Words as Hidden Inferences]=====
 
 
The mere presence of words can influence thinking, sometimes misleading it.
 
 
=====[http://lesswrong.com/lw/no/how_an_algorithm_feels_from_inside/ How An Algorithm Feels From Inside]=====
 
 
(see also the [[How_an_algorithm_feels|wiki page]])
 
 
(alternate summary:)
 
 
Together with ''[http://lesswrong.com/lw/of/dissolving_the_question/ Dissolving the Question]'' - setting up the problem.
 
 
=====[http://lesswrong.com/lw/np/disputing_definitions/ Disputing Definitions]=====
 
 
An example of how the technique helps.
 
 
=====[http://lesswrong.com/lw/nu/taboo_your_words/ Taboo Your Words]=====
 
 
Together with [http://lesswrong.com/lw/nv/replace_the_symbol_with_the_substance/ Replace the Symbol with the Substance] - description of the technique.
 
 
=====[http://lesswrong.com/lw/nv/replace_the_symbol_with_the_substance/ Replace the Symbol with the Substance]=====
 
 
Description of the technique.
 
 
=====[http://lesswrong.com/lw/o6/perpetual_motion_beliefs/ Perpetual Motion Beliefs]=====
 
 
See also [http://lesswrong.com/lw/o7/searching_for_bayesstructure/ Searching for Bayes-Structure].
 
  
 
=====[http://lesswrong.com/lw/of/dissolving_the_question/ Dissolving the Question]=====
This is where the "free will" puzzle is explicitly posed, along with criteria for what does and does not constitute a satisfying answer.

(alternate summary:)

Setting up the problem.
 
 
 
=====[http://lesswrong.com/lw/og/wrong_questions/ Wrong Questions]=====
 
 
 
Before the "free will" puzzle is [http://lesswrong.com/lw/of/dissolving_the_question/ fully and completely dissolved] on Less Wrong, aspiring reductionists should '''try to solve it on their own'''.
 
  
 
=====[http://lesswrong.com/lw/oo/explaining_vs_explaining_away/ Explaining vs. Explaining Away]=====
Elementary [[reductionism]].
 
 
=====[http://lesswrong.com/lw/op/fake_reductionism/ Fake Reductionism]=====
 
 
 
Genuinely dissolving a confusion isn't done by proclaiming it reducible.  It takes a detailed step-by-step walkthrough.
 
  
 
=====[http://lesswrong.com/lw/p1/initiation_ceremony/ Initiation Ceremony]=====
Brennan is inducted into the Conspiracy.
 
=====[http://lesswrong.com/lw/p2/hand_vs_fingers/ Hand vs. Fingers]=====
 
 
Together with ''[http://lesswrong.com/lw/oo/explaining_vs_explaining_away/ Explaining vs. Explaining Away]'' - elementary [[reductionism]].
 
 
=====[http://lesswrong.com/lw/p7/zombies_zombies/ Zombies! Zombies?]=====
 
 
By [[Eliezer Yudkowsky]].
 
 
=====[http://lesswrong.com/lw/p8/zombie_responses/ Zombie Responses]=====
 
 
By [[Eliezer Yudkowsky]].
 
 
=====[http://lesswrong.com/lw/p9/the_generalized_antizombie_principle/ The Generalized Anti-Zombie Principle]=====
 
 
By [[Eliezer Yudkowsky]].
 
 
=====[http://lesswrong.com/lw/pa/gazp_vs_glut/ GAZP vs. GLUT]=====
 
 
By [[Eliezer Yudkowsky]].
 
 
=====[http://lesswrong.com/lw/pn/zombies_the_movie/ Zombies: The Movie]=====
 
 
By [[Eliezer Yudkowsky]].
 
  
 
=====[http://lesswrong.com/lw/q9/the_failures_of_eld_science/ The Failures of Eld Science]=====
Jeffreyssai explains that rationalists should be fast.

(alternate summary:)

(prerequisite: [http://lesswrong.com/lw/r5/the_quantum_physics_sequence/ Quantum Physics])

(alternate summary:)

Fictional portrayal of a potential rationality dojo.
 
=====[http://lesswrong.com/lw/qa/the_dilemma_science_or_bayes/ The Dilemma: Science or Bayes?]=====
 
 
See also [http://lesswrong.com/lw/qb/science_doesnt_trust_your_rationality/ Science Doesn't Trust Your Rationality].
 
 
=====[http://lesswrong.com/lw/qe/do_scientists_already_know_this_stuff/ Do Scientists Already Know This Stuff?]=====
 
 
See also [http://lesswrong.com/lw/qf/no_safe_defense_not_even_science/ No Safe Defense, Not Even Science].
 
 
=====[http://lesswrong.com/lw/qr/timeless_causality/ Timeless Causality]=====
 
 
See also [http://lesswrong.com/lw/r1/timeless_control/ Timeless Control] (from [http://lesswrong.com/lw/r5/the_quantum_physics_sequence/ The Quantum Physics Sequence]).
 
  
 
=====[http://lesswrong.com/lw/qt/class_project/ Class Project]=====
The students are given one month to develop a theory of quantum gravity. (from [http://lesswrong.com/lw/r5/the_quantum_physics_sequence/ The Quantum Physics Sequence])
 
=====[http://lesswrong.com/lw/rh/heading_toward_morality/ Heading Toward Morality]=====
 
 
Begins the metaethics sequence, which runs from here to [http://lesswrong.com/lw/ta/invisible_frameworks/ August 22 2008], albeit with a good deal of related material before and after.
 
 
=====[http://lesswrong.com/lw/s6/probability_is_subjectively_objective/ Probability is Subjectively Objective]=====
 
 
See also [http://lesswrong.com/lw/om/qualitatively_confused/ Qualitatively Confused].
 
  
 
=====[http://lesswrong.com/lw/st/anthropomorphic_optimism/ Anthropomorphic Optimism]=====
You shouldn't bother coming up with clever, persuasive arguments for why evolution will do things the way you prefer.  It really isn't listening.
 
=====[http://lesswrong.com/lw/sx/inseparably_right_or_joy_in_the_merely_good/ Inseparably Right; or, Joy in the Merely Good]=====
 
 
Its arguments ground in "On reflection, don't you think this is what you would actually want (for yourself and others)?"
 
 
=====[http://lesswrong.com/lw/ta/invisible_frameworks/ Invisible Frameworks]=====
 
 
Concludes the metaethics sequence begun with [http://lesswrong.com/lw/rh/heading_toward_morality/ Heading Toward Morality] (August 22 2008), albeit with a good deal of related material before and after.
 
  
 
=====[http://lesswrong.com/lw/us/the_ritual/ The Ritual]=====
Jeffreyssai carefully undergoes a [[crisis of faith]].

(alternate summary:)

(short story)
 
  
 
=====[http://lesswrong.com/lw/vv/logical_or_connectionist_ai/ Logical or Connectionist AI?]=====
(The correct answer being "Wrong!")
 
=====[http://lesswrong.com/lw/vx/failure_by_analogy/ Failure By Analogy]=====
 
 
See also [http://lesswrong.com/lw/vy/failure_by_affective_analogy/ Failure By Affective Analogy].
 
  
 
=====[http://lesswrong.com/lw/xy/the_fun_theory_sequence/ The Fun Theory Sequence]=====
Describes some of the many complex considerations that determine ''what sort of happiness'' we most prefer to have - given that many of us would decline to just have an electrode planted in our pleasure centers.
 
=====[http://lesswrong.com/lw/yh/cynicism_in_evpsych_and_econ/ Cynicism in Ev-Psych (and Econ?)]=====
 
 
See also [http://lesswrong.com/lw/yi/the_evolutionarycognitive_boundary/ The Evolutionary-Cognitive Boundary].
 
 
=====[http://lesswrong.com/lw/1/about_less_wrong/ About Less Wrong]=====
 
 
In the post introducing the site, Eliezer mentioned two topics that deserved a moratorium until the end of April 2009. These are The Singularity and [[Artificial General Intelligence]]. In discussions, these are often referred to as "The Topics that Must Not be Named". Occasionally you'll also see "The Institute that Must Not be Named." This is presumably [http://intelligence.org SIAI] (the Singularity Institute for Artificial Intelligence).
 
 
=====[http://lesswrong.com/lw/2i/epistemic_viciousness/ Epistemic Viciousness]=====
 
 
On epistemic viciousness in the martial arts - and do attempts at [[rationality]] training run into the same problem?
 
 
=====[http://lesswrong.com/lw/3b/never_leave_your_room/ Never Leave Your Room]=====
 
 
By Yvain. See also [http://lesswrong.com/lw/4e/cached_selves/ Cached Selves] by Salamon and Rayhawk.
 
 
=====[http://lesswrong.com/lw/4e/cached_selves/ Cached Selves]=====
 
 
By Salamon and Rayhawk.
 
 
=====[http://lesswrong.com/lw/90/newcombs_problem_standard_positions/ Newcomb's Problem standard positions]=====
 
 
By [[Eliezer Yudkowsky]].
 
 
=====[http://lesswrong.com/lw/9p/extreme_rationality_its_not_that_great/ Extreme Rationality: It's Not That Great]=====
 
 
" which essentially answered "Not on the present state of the Art"
 
 
=====[http://lesswrong.com/lw/a6/the_unfinished_mystery_of_the_shangrila_diet/ The Unfinished Mystery of the Shangri-La Diet]=====
 
 
See also [http://lesswrong.com/lw/ab/akrasia_and_shangrila/ Akrasia and Shangri-La].
 
 
=====[http://lesswrong.com/lw/15m/towards_a_new_decision_theory/ Towards a New Decision Theory]=====
 
 
By Wei Dai.
 
 
=====[http://lesswrong.com/lw/19m/privileging_the_hypothesis/ Privileging the Hypothesis]=====
 
 
(and its [[Privileging the hypothesis|requisites]], like [[Locating the hypothesis]])
 
