Talk:RAZ Glossary

From Lesswrongwiki

Latest revision as of 11:34, 29 July 2020

Ideas for new entries

Other ideas for entries:

  • A: affect heuristic, akrasia, anthropics, artificial general intelligence, artificial intelligence, Aumann’s Agreement Theorem, availability heuristic, average utilitarianism, axon
  • B: backward chaining, base rate, base rate neglect, Bayes net, Bayes’s Theorem, Bayes-structure, Bayesian, Bayesian probability, Bayesian reasoner, Bayesian reasoning, Bayesian statistics, Bayesianism, bit, black swan, Blue, Born rule
  • C: calibration, capitalization, causal decision theory, cognitive bias, cognitive heuristic, cognitive science, collapse, complement, complex number, complexity, computation, conditional independence, confidence interval, configuration space, confirmation bias, consequentialism, conspiracy, cooperation, Copenhagen Interpretation, cosmological horizon, counterfactual, Cox’s Theorem, cryonics
  • D: D-separation, dan, de novo, decibel, decoherence, deduction, Deep Blue, defection, deontology, dimension, directed acyclic graph, doublethink, dukkha, Dutch Book, dynamic, dysrationalia
  • E: economies of scale, Egan’s Law, emergence, empiricism, epistemic rationality, epistemology, epsilon, eudaimonia, EURISKO, Everett branch, evidential decision theory, existential angst, existential risk, exponentiation
  • F: factorization, falsificationism, Fermi Paradox, frequentism, Friendly AI, fun theory, fungibility, futurism, fuzzy
  • G: g-factor, game theory, gene, General Relativity, gensym, Go, Green, grey goo, Gricean implication, group selection
  • H: hindsight bias, holodeck, humility, hyper-real number
  • I: induction, inductive bias, inefficient market, information theory, integer, instrumental, instrumental rationality, intelligence, intelligence explosion, intentionality, intuition pump, intuitionism, IRC, isomorphism
  • J: joint probability
  • K: koan, Kolmogorov complexity
  • L: Laplace’s Rule of Succession, Less Wrong, likelihood ratio, LISP token, Litany of Gendlin, Litany of Tarski, log odds, logarithm, logic, lookup table
  • M: machine code, Many Worlds Interpretation, marginal efficiency, marginal probability, marginal returns, market economy, Mind Projection Fallacy, modesty
  • N: necessary condition, neural networks
  • O: optimization, order of magnitude
  • P: package deal fallacy, particle, Peano arithmetic, prediction market, probability mass, p-value
  • R: real number, renormalization, reversible computer
  • S: scalar factor, science, second-order, self-handicapping, solipsism, sufficient condition, superposition

- RobbBB (talk) 12:27, 10 March 2015 (AEDT)

Entries removed from the main page for not seeming that enlightening

  • ad hominem. A verbal attack on the person making an argument, where a direct criticism of the argument is possible and would be more relevant. The term is reserved for cases where talking about the person amounts to changing the topic. If your character is the topic from the outset (e.g., during a job interview), then it isn't an ad hominem fallacy to cite evidence showing that you're a lousy worker.
  • algorithm. A specific procedure for computing some function. A mathematical object consisting of a finite, well-defined sequence of steps that concludes with some output determined by its initial input. Multiple physical systems can simultaneously instantiate the same algorithm.
  • anthropomorphism. The tendency to assign human qualities to non-human phenomena.
  • ASCII. The American Standard Code for Information Interchange. A very simple system for encoding 128 ordinary English letters, numbers, and punctuation.
  • bucket. See “pebble and bucket.”
  • econblog. Economics blog.
  • evolution. (a) In biology, change in a population’s heritable features. (b) In other fields, change of any sort.
  • formalism. A specific way of logically or mathematically representing something.
  • function. A relation between inputs and outputs such that every input has exactly one output. A mapping between two sets in which every element in the first set is assigned a single specific element from the second.
  • hat tip. A grateful acknowledgment of someone who brought information to one's attention.
  • idiot god. One of Yudkowsky's pet names for natural selection.
  • iff. If, and only if.
  • Lamarckism. The 19th-century pre-Darwinian hypothesis that populations evolve via the hereditary transmission of the traits practiced and cultivated by the previous generation.
  • Machine Intelligence Research Institute. A non-profit organization that works on mathematical research related to Friendly AI. Yudkowsky co-founded MIRI in 2000, and is the senior researcher there.
  • Maxwell’s equations. In classical physics, a set of differential equations that model the behavior of electromagnetic fields.
  • meme. Richard Dawkins’ term for a thought that can be spread through social networks.
  • minimax. A decision rule for turn-based zero-sum two-player games: choose the move whose worst-case outcome is best, i.e., minimize the maximum loss your opponent can inflict. This rule is intended to perform well even in worst-case scenarios where one’s opponent makes excellent decisions.
  • MIRI. See “Machine Intelligence Research Institute.”
  • money pump. A person who is irrationally willing to accept sequences of trades that add up to an expected loss.
  • natural selection. The process by which heritable biological traits change in frequency due to their effect on how much their bearers reproduce.
  • Neutral Point of View. A policy used by the online encyclopedia Wikipedia to instruct users on how they should edit the site’s contents. Following this policy means reporting on the different positions in controversies, while refraining from weighing in on which position is correct.
  • normality. (a) What’s commonplace. (b) What’s expected, prosaic, and unsurprising. Categorizing things as “normal” or “weird” can cause one to conflate these two definitions, as though something must be inherently extraordinary or unusual just because one finds it surprising or difficult to predict. This is an example of confusing a feature of mental maps with a feature of the territory.
  • objective. (a) Remaining real or true regardless of what one’s opinions or other mental states are. (b) Conforming to generally applicable moral or epistemic norms (e.g., fairness or truth) rather than to one’s biases or idiosyncrasies. (c) Perceived or acted on by an agent. (d) A goal.
  • Objectivism. A philosophy and social movement invented by Ayn Rand, known for promoting self-interest and laissez-faire capitalism as “rational.”
  • OLPC. See “One Laptop Per Child.”
  • One Laptop Per Child. A program to distribute cheap laptops to poor children.
  • OpenCog. An open-source AGI project based in large part on work by Ben Goertzel. MIRI provided seed funding to OpenCog in 2008, but subsequently redirected its research efforts elsewhere.
  • oracle. See “halting oracle.”
  • Overcoming Bias. The blog where Yudkowsky originally wrote most of the content of Rationality: From AI to Zombies. It can be found at www.overcomingbias.com, where it now functions as the personal blog of Yudkowsky’s co-blogger, Robin Hanson. Most of Yudkowsky’s writing is now hosted on the community blog Less Wrong.
  • pebble and bucket. An example of a system for mapping reality, analogous to memory or belief. One picks some variable in the world, and places pebbles in the bucket when the variable’s value (or one’s evidence for its value) changes. The point of this illustrative example is that the mechanism is very simple, yet achieves many of the same goals as properties that see heated philosophical debate, such as perception, truth, knowledge, meaning, and reference.
  • photon. An elementary particle of light.
  • proposition. Something that is either true or false. Commands, requests, questions, cheers, and excessively vague or ambiguous assertions are not propositions in this strict sense. Some philosophers identify propositions with sets of possible worlds -- that is, they think of propositions like “snow is white” not as particular patterns of ink in books, but rather as the thing held in common by all logically consistent scenarios featuring white snow. This is one way of abstracting away from how sentences are worded, what language they are in, etc., and merely discussing what makes the sentences true or false. (In mathematics, the word “proposition” has separately been used to refer to theorems -- e.g., “Euclid’s First Proposition.”)
  • quantum mechanics. The branch of physics that studies subatomic phenomena and their nonclassical implications for larger structures; also, the mathematical formalisms used by physicists to predict such phenomena. Although the predictive value of such formalisms is extraordinarily well-established experimentally, physicists continue to debate how to incorporate gravitation into quantum mechanics, whether there are more fundamental patterns underlying quantum phenomena, and why the formalisms require a “Born rule” to relate the deterministic evolution of the wavefunction under Schrödinger’s equation to observed experimental outcomes. Related to the last question is a controversy in philosophy of physics over the physical significance of quantum-mechanical concepts like “wavefunction,” e.g., whether this mathematical structure in some sense exists objectively, or whether it is merely a convenience for calculation.
  • recursion. A sequence of similar actions that each build on the result of the previous action.
  • separate magisteria. See “magisterium.”
  • sequences. Yudkowsky’s name for short series of thematically linked blog posts or essays.
  • set theory. The study of relationships between abstract collections of objects, with a focus on collections of other collections. A branch of mathematical logic frequently used as a foundation for other mathematical fields.
  • Singularity Summit. An annual conference held by MIRI from 2006 to 2012. Purchased by Singularity University in 2013.
  • strawman. An indefensible claim that is wrongly attributed to someone whose actual position is more plausible.
  • subjective. (a) Conscious, experiential. (b) Dependent on the particular distinguishing features (e.g., mental states) of agents. (c) Playing favorites, disregarding others’ knowledge or preferences, or otherwise violating some norm as a result of personal biases. Importantly, something can be subjective in sense (a) or (b) without being subjective in sense (c); e.g., one’s ice cream preferences and childhood memories are “subjective” in a perfectly healthy sense.
  • subjectivism. See “Berkeleian idealism.”
  • territory. See “map and territory.”
  • theorem. A statement that has been mathematically or logically proven.
  • Type-A materialism. David Chalmers’s term for the view that the world is purely physical, and that there is no need to try to explain the relationship between the physical facts and the facts of first-person conscious experience. Type-A materialists deny that there is even an apparent mystery about why philosophical zombies seem conceivable. Other varieties of materialist accept that this is a mystery, but expect it to be solved eventually, or deny that the lack of a solution undermines physicalism.
  • utility maximizer. An agent that always picks actions with better outcomes over ones with worse outcomes (relative to its utility function). An expected utility maximizer is more realistic, given that real-world agents must deal with ignorance and uncertainty: it picks the actions that are likeliest to maximize its utility, given the available evidence. An expected utility maximizer’s decisions would sometimes be suboptimal in hindsight, or from an omniscient perspective; but they would never be foreseeably inferior to any alternative decision, given the agent’s available evidence. Humans can sometimes be usefully modeled as expected utility maximizers with a consistent utility function, but this is at best an approximation, since humans are not perfectly rational.
  • vertex. See “graph.”
  • winning. Yudkowsky’s term for getting what you want. The result of instrumental rationality.
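A minimal Python sketch of the decision rule described in the “minimax” entry above. The game tree and its payoff values are invented for illustration (they are not from the glossary): leaves hold the first player’s payoff, and the two players alternate maximizing and minimizing it.

```python
def minimax(node, maximizing):
    """Evaluate a game tree for the first (maximizing) player.

    node is either a number (a leaf's payoff to the first player)
    or a list of child nodes (the moves available at this turn).
    """
    if isinstance(node, (int, float)):  # leaf: its payoff is its value
        return node
    # Recurse into each possible move, with the other player to act next.
    values = [minimax(child, not maximizing) for child in node]
    # The maximizer picks the best child; the minimizer picks the worst.
    return max(values) if maximizing else min(values)

# A depth-2 tree: the first player chooses a branch, then the opponent
# picks whichever leaf in that branch is worst for the first player.
tree = [[3, 12], [2, 8], [1, 14]]
print(minimax(tree, maximizing=True))  # → 3
```

Branch one guarantees the first player at least 3 (the opponent will choose 3 over 12), whereas the other branches guarantee only 2 and 1, so minimax selects the first branch even though another branch contains the tempting leaf 14.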

-RobbBB (talk) 10:34, 29 July 2020 (AEST)