From Lesswrongwiki
Revision as of 01:41, 28 April 2011 by Curiousepic (talk | contribs)

This is a short list of common terms and phrases used on LessWrong.

Agree Denotationally, But Object Connotatively
Discussion in When Truth Isn't Enough
As Far As I Can Tell
Affective death spiral
When positive attributions combine with the halo effect in a positive feedback loop.
Artificial general intelligence
Anti-epistemology
Bad rules of thinking itself, constructed to protect a false belief.
Bayesian
A codeword cognitive scientists use for "rational". Key concept in understanding LessWrong.
Bayesian Conspiracy
A fictional secret society of Bayesians.
Belief update
What you do to your beliefs, opinions and cognitive structure when new evidence comes along.
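A belief update can be sketched numerically with Bayes' theorem. The following is a minimal illustration; the scenario and all the numbers in it are hypothetical, chosen only to show the arithmetic:

```python
def bayes_update(prior, likelihood_true, likelihood_false):
    """Posterior probability of hypothesis H after observing evidence E.

    prior            -- P(H) before seeing the evidence
    likelihood_true  -- P(E | H)
    likelihood_false -- P(E | not H)
    """
    evidence = likelihood_true * prior + likelihood_false * (1 - prior)
    return likelihood_true * prior / evidence

# Hypothetical example: you assign 30% prior probability to a coin
# being double-headed, then observe ten heads in a row.
posterior = bayes_update(
    prior=0.3,
    likelihood_true=1.0,         # a double-headed coin always shows heads
    likelihood_false=0.5 ** 10,  # a fair coin shows ten heads with probability (1/2)^10
)
print(round(posterior, 4))  # the evidence pushes the belief close to 1
```

Note how strong evidence moves even a modest prior most of the way to certainty; that movement is the "update".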
Blues, Greens
Roman Empire chariot-racing teams that became part of politics. Used in place of real party names.
See Color politics
Crisis of faith
What to have when you may have been quite wrong for a long time.
Dark arts
Rhetorical techniques crafted to exploit human cognitive biases. Considered bad behaviour even if the belief you want to communicate is good.
Edited To Add
Friendly AI
Fully general counterargument
An argument which can be used to discount any conclusion the arguer does not like.
Fuzzies
The desired but less useful counterpart to utilons. They make you feel you're altruistic and socially contributing.
Hedon
A unit philosophers use to quantify pleasure. (Note: no actual quantifying is done.)
Hollywood rationality
What Spock does, not what actual rationalists do.
Intelligence augmentation
I Agree With Your Conclusion
Generally used when nitpicking, to make it clear that the nitpicks are not meant to represent actual disagreement. Discussed in Support That Sounds Like Dissent.
I don't know
Something that can't be entirely true if you can even formulate a question.
It Seems To Me
LCPW, Least convenient possible world
Assume that all the specific details will align with the idea against which you are arguing, so that you cannot evade a philosophical question by nitpicking the details.
Logical rudeness
A response to criticism which insulates the responder from having to address the criticism directly, without appearing to be conventional rudeness.
Less Wrong
Mind-killer
A topic that reliably produces biased discussions, e.g. politics or Pick-Up Artists.
Motivated cognition
Reasoning that starts with its conclusion and works backwards.
Overcoming Bias
Omega
A hypothetical superintelligent being, canonically found in Newcomb's problem.
Paperclip maximizer
An AI created to maximize the number of paperclips in the universe. A standard example of an Unfriendly AI.
Paranoid debating
A group estimation game in which one player, unknown to the others, tries to subvert the group estimate.
Perceptual control theory
Prisoner's dilemma
Philosophical zombie or P-Zombie
A creature which looks and behaves indistinguishably from a human down to the atomic level, but is not conscious. The concept is not well-respected on LessWrong.
Prior
What you update from in Bayesian calculations. In practical terms, everything you think you know now.
Quality-adjusted life year
Rationalist taboo
A technique for reducing confusion about what you are talking about: taboo the use of a given word or its synonyms and substitute a description of what you mean. Particularly useful in arguments over definitions.
Semantic stopsign
A term that looks like an explanation but, on closer examination, doesn't actually explain anything.
Shut up and multiply
How to do a utility calculation without scope insensitivity.
Teleology
Discussing an event in a manner that implies it is caused by its future consequences.
TL;DR
Too long; didn't read. Polite use: a one-line summary at the top of your long article. Impolite use: a dismissive response to another's long piece of writing or unparagraphed slab of text.
Topic that must not be named
When LessWrong was started, Eliezer put a temporary moratorium on discussion of the Singularity or AI. You will see this used in old discussions to allude to these topics.
Tsuyoku naritai
Japanese: "I want to become stronger."
Unfriendly AI
Utilons
Units of utility. Contrast "Fuzzies".
Your Mileage May Vary
See Other-optimizing