Jargon
This is a short list of common terms and phrases used on LessWrong.
- ADBOC
- Agree Denotationally, But Object Connotatively
- Discussion in When Truth Isn't Enough
- AFAICT
- As Far As I Can Tell
- Affective death spiral
- When positive attributions combine with the halo effect in a positive feedback loop.
- AGI
- Artificial general intelligence
- Anti-epistemology
- Bad rules for thinking itself, capable of protecting false beliefs.
- Beisutsukai
- A fictional secret society of Bayesians.
- Belief update
- What you do to your beliefs, opinions and cognitive structure when new evidence comes along.
- Blues, Greens
- Roman Empire chariot-racing teams that became part of politics. Used in place of real party names.
- See Color politics
- CEV
- Coherent Extrapolated Volition
- Consequentialist/ism
- Consequentialism is a moral theory in which choices are judged by the consequences of your actions. It's covered in more depth in the [Consequentialism FAQ].
- Crisis of faith
- What to have when you may have been quite wrong for a long time.
- Dark arts
- Rhetorical techniques crafted to exploit human cognitive biases. Considered bad behaviour even if the belief you want to communicate is good.
- Deontology/deontological ethics
- Deontological ethics (from Greek deon, "obligation, duty"; and -logia) is an approach to ethics that judges the morality of an action based on the action's adherence to a rule or rules. See the Wikipedia article on [Deontological ethics] for more.
- ETA
- Edited To Add
- FAI
- Friendly AI
- Fully general counterargument
- An argument which can be used to discount any conclusion the arguer does not like.
- Fuzzies
- The desired but less useful counterpart to utilons. They make you feel you're altruistic and socially contributing.
- Hedon
- A unit philosophers use to quantify pleasure. (Note: no actual quantifying is done.)
- Hollywood rationality
- What Spock does, not what actual rationalists do.
- IA
- Intelligence augmentation
- IAWYC
- I Agree With Your Conclusion
- Generally used when nitpicking, to make it clear that the nitpicks are not meant to represent actual disagreement. Discussed in Support That Sounds Like Dissent.
- I don't know
- Something that can't be entirely true if you can even formulate a question.
- ISTM
- It Seems To Me
- Kolmogorov complexity
- Given a string, the length of the shortest possible program that prints it (see the formulas after this list).
- LCPW, Least convenient possible world
- To assume that all the specific details will align with the idea against which you are arguing, and that you can't evade a philosophical question by nitpicking details.
- Logical rudeness
- A response to criticism which insulates the responder from having to address the criticism directly, without appearing to be conventional rudeness.
- LW
- Less Wrong
- Mind-killer
- A topic that reliably produces biased discussions, e.g. politics or Pick-Up Artists.
- Motivated cognition
- Reasoning that starts with its conclusion and works backwards.
- OB
- Overcoming Bias
- One-box
- One of the choices for Newcomb's problem
- Ontology/ontological
- The philosophical study of the nature of being, existence, or reality. It deals with questions concerning what entities exist or can be said to exist, and how such entities can be grouped, related within a hierarchy, and subdivided according to similarities and differences. See also Wikipedia's [Ontological argument] for an example of (ab)using ontology to try to prove the existence of God.
- Omega
- A hypothetical superintelligent being, canonically found in Newcomb's problem.
- Paperclip maximizer
- An AI that has been created to maximise the number of paperclips in the universe. A form of UFAI.
- Paranoid debating
- A group estimation game in which one player, unknown to the others, tries to subvert the group estimate.
- PCT
- Perceptual control theory
- PD
- Prisoner's dilemma
- Philosophical zombie or P-Zombie
- A creature which looks and behaves indistinguishably from a human down to the atomic level, but is not conscious. The concept is not well-respected on LessWrong.
- Priors
- What you update from in Bayesian calculations. In practical terms, everything you think you know now (see the worked Bayes example after this list).
- QALY
- Quality-adjusted life year
- Rationalist taboo
- A technique for reducing what you are talking about to more basic terms: taboo the use of a given word or its synonyms. Particularly useful in arguments over definitions.
- Semantic stopsign
- A term that looks like an explanation but, on closer examination, doesn't actually explain anything.
- Shut up and multiply
- How to do a utility calculation without scope insensitivity (see the expected-utility sketch after this list).
- Solomonoff induction
- A formalised version of Occam's razor based on Kolmogorov complexity (see the formulas after this list).
- Taboo the word ...
- This is the technique of Rationalist taboo, whereby you taboo the use of a given word or its synonyms. Particularly useful in arguments over definitions.
- Teleology
- Discussing an event in a manner that implies it is caused by its future consequences.
- tl;dr
- Too long; didn't read.
- Polite use: a one-line summary at the top of your long article. Impolite use: a dismissive response to another's long piece of writing or an unparagraphed slab of text.
- Topic that must not be named
- When LessWrong was started, Eliezer put a temporary moratorium on discussion of the Singularity or AI. You will see this used in old discussions to allude to these topics.
- Two-box
- One of the choices for Newcomb's problem
- Tsuyoku naritai
- Japanese: "I want to become stronger."
- UFAI
- Unfriendly AI
- Utility function
- A utility function assigns numerical values ("utilities") to outcomes, in such a way that outcomes with higher utilities are always preferred to outcomes with lower utilities (see the sketch after this list).
- Utilons
- Units of utility. Contrast "Fuzzies".
- Update
- See Belief update.
- YMMV
- Your Mileage May Vary
- See Other-optimizing
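A worked example for the Belief update and Priors entries above. This is a minimal sketch of a single Bayesian update; the hypothesis and all of the numbers (the 1% prior, the 80% true-positive rate, the 9.6% false-positive rate) are illustrative assumptions, not anything specified on this page.

```python
# Minimal Bayesian update: posterior = likelihood * prior / P(evidence).
# All numbers are illustrative assumptions.

prior = 0.01              # P(H): credence in hypothesis H before seeing evidence
p_e_given_h = 0.80        # P(E | H): chance of observing evidence E if H is true
p_e_given_not_h = 0.096   # P(E | not H): chance of observing E if H is false

# Total probability of observing E (law of total probability).
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)

# Bayes' theorem: the posterior is what you update to after seeing E.
posterior = p_e_given_h * prior / p_e

print(f"P(H | E) = {posterior:.3f}")  # about 0.078
```

Note that even fairly strong evidence only lifts a 1% prior to roughly 8%: your priors constrain how far a single update can move you.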
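For the Kolmogorov complexity and Solomonoff induction entries, the standard textbook formulations can be written out explicitly. This is a sketch of the usual definitions, with U a fixed universal Turing machine and |p| the length of a program p; none of the notation comes from this page.

```latex
% Kolmogorov complexity of a string x: the length of the shortest
% program p that makes the universal machine U output x.
K(x) = \min \{\, |p| \;:\; U(p) = x \,\}

% Solomonoff's universal prior: every program that outputs x contributes
% weight 2^{-|p|}, so shorter (simpler) explanations dominate the sum.
M(x) = \sum_{p \,:\, U(p) = x} 2^{-|p|}
```

The second formula is why Solomonoff induction is described above as a formalised Occam's razor: hypotheses are programs, and simpler programs get exponentially more prior weight.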
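For the Utility function and Shut up and multiply entries, a minimal sketch of an expected-utility calculation. The outcomes, utilities and probabilities below are made up purely for illustration.

```python
# A toy utility function: assign each outcome a number so that
# more-preferred outcomes always receive higher utilities.
utility = {"save 500 lives": 500.0, "save 1 life": 1.0, "save nobody": 0.0}

# Two hypothetical gambles, given as (probability, outcome) pairs.
certain_option = [(1.0, "save 1 life")]
risky_option = [(0.1, "save 500 lives"), (0.9, "save nobody")]

def expected_utility(gamble):
    """Shut up and multiply: weight each outcome's utility by its probability and sum."""
    return sum(p * utility[outcome] for p, outcome in gamble)

print(expected_utility(certain_option))  # 1.0
print(expected_utility(risky_option))    # 50.0 -> the multiplication favours the risky option
```

The point of "shut up and multiply" is that the arithmetic, not the emotional pull of the certain option, is what scales correctly with the number of lives at stake.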