Jargon

Revision as of 10:01, 11 August 2011

This is a short list of common terms and phrases used on LessWrong.

ADBOC
Agree Denotationally, But Object Connotatively. Discussed in When Truth Isn't Enough.
AFAICT
As Far As I Can Tell.
Affective death spiral
When positive attributions combine with the halo effect in a positive feedback loop.
AGI
Artificial general intelligence.
Anti-epistemology
Bad rules for thinking itself, capable of protecting false beliefs.
Bayesian Conspiracy
A fictional secret society of Bayesians.
Belief update
What you do to your beliefs, opinions and cognitive structure when new evidence comes along.
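A belief update of this kind is just Bayes' theorem in action. Here is a minimal sketch in Python; the `update` function and all the probabilities are invented for illustration, not taken from this wiki.

```python
# A minimal sketch of a Bayesian belief update for a single hypothesis.
# The probabilities are illustrative, not taken from the wiki.
def update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Posterior probability of the hypothesis after seeing the evidence."""
    numerator = prior * p_evidence_if_true
    evidence = numerator + (1 - prior) * p_evidence_if_false
    return numerator / evidence

# Start at 50% belief; the evidence is four times likelier if the
# hypothesis is true, so belief rises to 80%.
posterior = update(0.5, 0.8, 0.2)
print(round(posterior, 3))  # 0.8
```

New evidence never replaces your beliefs wholesale; it shifts the prior toward whichever hypothesis made the evidence more likely.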
Blues, Greens
Roman Empire chariot-racing teams that became part of politics. Used in place of real party names.
See Color politics
Coherent Extrapolated Volition
Consequentialism
A moral theory in which your choices are judged by the consequences of your actions. Covered in more depth in the Consequentialism FAQ.
Crisis of faith
What to have when you may have been quite wrong for a long time.
Dark arts
Rhetorical techniques crafted to exploit human cognitive biases. Considered bad behaviour even if the belief you want to communicate is good.
ETA
Edited To Add.
Friendly AI
Fully general counterargument
An argument which can be used to discount any conclusion the arguer does not like.
Fuzzies
The desired but less useful counterpart to utilons. They make you feel you're altruistic and socially contributing.
Hedon
A unit philosophers use to quantify pleasure. (Note: no actual quantifying is done.)
Hollywood rationality
What Spock does, not what actual rationalists do.
IA
Intelligence augmentation.
IAWYC
I Agree With Your Conclusion.
Generally used when nitpicking, to make it clear that the nitpicks are not meant to represent actual disagreement. Discussed in Support That Sounds Like Dissent.
I don't know
Something that can't be entirely true if you can even formulate a question.
ISTM
It Seems To Me.
Kolmogorov complexity
Given a string, the length of the shortest possible program that prints it.
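Kolmogorov complexity is uncomputable in general, but a compressor gives a rough upper bound on it: a patterned string can be printed by a short program, while an arbitrary one cannot. The sketch below uses zlib-compressed length as that hedged stand-in; the strings and the `compressed_len` helper are invented for illustration.

```python
import random
import zlib

# Kolmogorov complexity itself is uncomputable; as a rough, hedged stand-in,
# zlib-compressed length gives an upper bound on how short a "program"
# printing the string could be.
def compressed_len(s: str) -> int:
    return len(zlib.compress(s.encode()))

patterned = "ab" * 500  # a very short program prints this string
random.seed(0)  # fixed seed so the example is reproducible
arbitrary = "".join(random.choice("abcdefgh") for _ in range(1000))

# The patterned string compresses far better than the arbitrary one.
print(compressed_len(patterned) < compressed_len(arbitrary))  # True
```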
LCPW, Least convenient possible world
To assume that all the specific details will align with the idea against which you are arguing, and that you can't evade a philosophical question by nitpicking details.
Logical rudeness
A response to criticism which insulates the responder from having to address the criticism directly, without appearing to be conventional rudeness.
Less Wrong
Mind-killer
A topic that reliably produces biased discussions, e.g. politics or Pick-Up Artists.
Motivated cognition
Reasoning that starts with its conclusion and works backwards.
Omega
A hypothetical superintelligent being, canonically found in Newcomb's problem.
One-box
One of the choices in Newcomb's problem.
Overcoming Bias
Paperclip maximizer
An AI that has been created to maximise the number of paperclips in the universe. A form of UFAI.
Paranoid debating
A group estimation game in which one player, unknown to the others, tries to subvert the group estimate.
Perceptual control theory
Prisoner's dilemma
Philosophical zombie or P-Zombie
A creature which looks and behaves indistinguishably from a human down to the atomic level, but is not conscious. The concept is not well-respected on LessWrong.
Prior
What you update from in Bayesian calculations. In practical terms, everything you think you know now.
Quality-adjusted life year
Rationalist taboo
A technique of reducing what you are talking about: taboo the use of a given word or its synonyms. Particularly useful in arguments over definitions.
Semantic stopsign
A term that looks like an explanation but, on closer examination, doesn't actually explain anything.
Shut up and multiply
How to do a utility calculation without scope insensitivity.
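The slogan describes bare arithmetic: instead of trusting a gut feeling that two interventions feel similar, multiply the per-unit value by the number of units. A minimal sketch, with all names and numbers invented for illustration:

```python
# "Shut up and multiply" as bare arithmetic. The per-unit values and
# counts here are illustrative, not from the wiki.
def total_value(value_per_unit: float, n_units: int) -> float:
    return value_per_unit * n_units

small = total_value(1.0, 2_000)    # e.g. helping 2,000 birds
large = total_value(1.0, 200_000)  # e.g. helping 200,000 birds

# Scope-insensitive intuition rates these as similar; multiplying out
# shows the second is 100 times larger.
print(large / small)  # 100.0
```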
Solomonoff induction
A formalised version of Occam's razor based on Kolmogorov complexity.
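The underlying prior can be written as a formula (this is the standard textbook statement, not taken from this wiki page): the probability assigned to a string sums over all programs that print it, and each program is weighted by its length, so shorter programs, i.e. simpler hypotheses, dominate.

```latex
M(x) \;=\; \sum_{p \,:\, U(p) = x} 2^{-\lvert p \rvert}
```

Here $U$ is a universal machine, $p$ ranges over programs whose output is $x$, and $\lvert p \rvert$ is the length of $p$ in bits. A program one bit longer contributes half as much prior probability, which is Occam's razor made quantitative.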
Taboo the word ...
The technique of Rationalist taboo, whereby you taboo the use of a given word or its synonyms. Particularly useful in arguments over definitions.
Teleology
Discussing an event in a manner that implies it is caused by its future consequences.
TL;DR
Too long; didn't read. Polite use: a one-line summary at the top of your long article. Impolite use: a dismissive response to another's long piece of writing or unparagraphed slab of text.
Topic that must not be named
When LessWrong was started, Eliezer put a temporary moratorium on discussion of the Singularity or AI. You will see this used in old discussions to allude to these topics.
Tsuyoku naritai
Japanese: "I want to become stronger."
Two-box
One of the choices in Newcomb's problem.
Unfriendly AI
Utility function
A utility function assigns numerical values ("utilities") to outcomes, in such a way that outcomes with higher utilities are always preferred to outcomes with lower utilities.
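The definition above can be sketched in a few lines of Python; the outcomes and their utilities are made up for illustration.

```python
# A minimal sketch of a utility function over discrete outcomes.
# The outcome names and numbers are invented for illustration.
utility = {"walk": 2.0, "bus": 3.5, "stay home": 1.0}

# An agent acting on this utility function always prefers the outcome
# with the higher utility.
best = max(utility, key=utility.get)
print(best)  # bus
```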
Utilons
Units of utility. Contrast "Fuzzies".
Update
See Belief update.
YMMV
Your Mileage May Vary. See Other-optimizing.