https://wiki.lesswrong.com/api.php?action=feedcontributions&user=Ete&feedformat=atomLesswrongwiki - User contributions [en]2021-03-09T00:56:45ZUser contributionsMediaWiki 1.31.10https://wiki.lesswrong.com/index.php?title=Superintelligence&diff=15620Superintelligence2016-10-13T19:03:37Z<p>Ete: added links to Arbital</p>
<hr />
<div>{{wikilink}}<br />
{{arbitallink|https://arbital.com/p/superintelligent/|Superintelligent}}<br />
A '''Superintelligence''' is a being with superhuman intelligence, and a focus of the [[Machine Intelligence Research Institute]]'s research. Specifically, Nick Bostrom (1997) defined it as<br />
<br />
<blockquote> "An intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills."</blockquote><br />
<br />
The [[Machine Intelligence Research Institute]] is dedicated to ensuring humanity's safety and prosperity by preparing for the development of an [[Artificial General Intelligence]] with superintelligence. Given its intelligence, such an AI is likely to be [[AI boxing|impossible for humanity to control]]. It is important to prepare early for the development of [[friendly artificial intelligence]], as there may be an [[AI arms race]]. A ''strong superintelligence'' is a superintelligence that is not designed with the same architecture as the human brain.<br />
<br />
An [[Artificial General Intelligence]] will have a number of advantages aiding it in becoming a superintelligence. It can improve the hardware it runs on and obtain better hardware. It will be capable of directly editing its own code. Depending on how easy its code is to modify, it might carry out software improvements that [[recursive self-improvement|spark further improvements]]. Where a task can be accomplished in a repetitive way, a module performing the task far more efficiently might be developed. Its motivations and preferences can be edited to be more consistent with each other. It will have an indefinite life span, be capable of reproducing, and be able to transfer knowledge, skills, and code among its copies, as well as to cooperate and communicate with them better than humans do with each other.<br />
<br />
The development of superintelligence from humans is another possibility, sometimes termed a weak superintelligence. It may come in the form of [[whole brain emulation]], where a human brain is scanned and simulated on a computer. Many of the advantages an AGI has in developing superintelligence apply here as well. The development of [[Brain-computer interfaces]] may also lead to the creation of superintelligence. Biological enhancements such as genetic engineering and the use of nootropics could lead to superintelligence as well. <br />
<br />
==Blog Posts==<br />
<br />
*[http://www.acceleratingfuture.com/articles/superintelligencehowsoon.htm Superintelligence] by Michael Anissimov<br />
<br />
==External Links==<br />
<br />
*[http://www.nickbostrom.com/superintelligence.html How long before Superintelligence?] by Nick Bostrom<br />
*[http://profhugodegaris.files.wordpress.com/2011/04/nocyborgsbghugo.pdf A discussion between Hugo de Garis and Ben Goertzel on superintelligence]<br />
*[http://www.xuenay.net/Papers/DigitalAdvantages.pdf Advantages of Artificial Intelligences, Uploads, And Digital Minds] by Kaj Sotala<br />
<br />
==See Also==<br />
<br />
*[[Brain-computer interfaces]]<br />
*[[Singularity]]<br />
*[[Hard takeoff]]</div>Etehttps://wiki.lesswrong.com/index.php?title=Solomonoff_induction&diff=15619Solomonoff induction2016-10-13T19:01:58Z<p>Ete: added links to Arbital</p>
<hr />
<div>{{arbitallink|https://arbital.com/p/solomonoff_induction/|Solomonoff induction}}{{wikilink}}<br />
[[Wikipedia:Ray Solomonoff|Ray Solomonoff]] defined an inference<br />
system that will learn to correctly predict any computable<br />
sequence with only the absolute minimum amount of data. This<br />
system, in a certain sense, is the perfect universal prediction<br />
algorithm. To summarize it very informally, '''Solomonoff induction'''<br />
works by: <br />
* Starting with all possible hypotheses (sequences) as represented by computer programs (that generate those sequences), weighted by their simplicity (2<sup>-'''n'''</sup>, where '''n''' is the program length);<br />
* Discarding those hypotheses that are inconsistent with the data.<br />
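The two steps above can be sketched with a toy hypothesis class. This is an illustration, not Solomonoff's actual construction (which requires a universal Turing machine): here each "program" is simply a bit string that generates the sequence consisting of itself repeated forever, weighted by 2<sup>-'''n'''</sup> for length '''n'''.<br />

```python
from fractions import Fraction
from itertools import product

def hypotheses(max_len):
    """Toy 'programs': every bit string p of length <= max_len, where
    program p 'computes' the sequence p repeated forever.  Each is
    weighted by 2**-len(p), the simplicity prior."""
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            p = "".join(bits)
            yield p, Fraction(1, 2 ** n)

def generated(p, k):
    """First k bits of the sequence that program p generates."""
    return (p * (k // len(p) + 1))[:k]

def predict_next(data, max_len=8):
    """Solomonoff-style prediction: discard hypotheses inconsistent
    with the observed data, then take a weighted vote on the next bit."""
    weight = {"0": Fraction(0), "1": Fraction(0)}
    for p, w in hypotheses(max_len):
        if generated(p, len(data)) == data:          # consistent with data?
            weight[generated(p, len(data) + 1)[-1]] += w
    total = weight["0"] + weight["1"]
    return {bit: w / total for bit, w in weight.items()}

probs = predict_next("010101")
```

With the data "010101", the surviving hypotheses are dominated by the short program "01", so the weighted vote assigns probability 22/23 to the next bit being 0 - the simplicity prior at work.<br />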
<br />
Weighting hypotheses by simplicity, the system automatically<br />
incorporates a form of [[Occam's razor]], which is why it has been<br />
playfully referred to as ''Solomonoff's lightsaber''.<br />
<br />
Solomonoff induction gets off the ground with a solution to the<br />
"problem of the priors". Suppose that you stand before a<br />
universal [http://www.scholarpedia.org/article/Algorithmic_complexity#Prefix_Turing_machine prefix Turing machine]<br />
<math>U</math>. You are interested in a certain finite output<br />
string <math>y_{0}</math>. In particular, you want to know the<br />
probability that <math>U</math> will produce the output<br />
<math>y_{0}</math> given a random input tape. This probability is<br />
the '''Solomonoff ''a priori'' probability''' of<br />
<math>y_{0}</math>.<br />
<br />
More precisely, suppose that a particular infinite input string<br />
<math>x_{0}</math> is about to be fed into<br />
<math>U</math>. However, you know nothing about<br />
<math>x_{0}</math> other than that each term of the string is either<br />
<math>0</math> or <math>1</math>. As far as your<br />
state of knowledge is concerned, the <math>i</math>th digit of<br />
<math>x_{0}</math> is as likely to be <math>0</math> as it is to<br />
be <math>1</math>, for all <math>i = 1, 2, \ldots</math>. You<br />
want to find the ''a priori'' probability <math>m(y_{0})</math> of<br />
the following proposition:<br />
<br />
(*) If <math>U</math> takes in <math>x_{0}</math><br />
as input, then <math>U</math> will produce output <math>y_{0}</math><br />
and then halt.<br />
<br />
Unfortunately, computing the exact value of <math>m(y_{0})</math> would require solving the halting problem, which is undecidable. Nonetheless, it is easy to derive an expression for <math>m(y_{0})</math>. If <math>U</math> halts on an infinite input string <math>x</math>, then <math>U</math> must read only a finite initial segment of <math>x</math>, after which <math>U</math> immediately halts. We call a finite string <math>p</math> a ''self-delimiting program'' if and only if there exists an infinite input string <math>x</math> beginning with <math>p</math> such that <math>U</math> halts on <math>x</math> immediately after reading to the end of <math>p</math>. The set <math>\mathcal{P}</math> of self-delimiting programs is the ''prefix code'' for <math>U</math>. It is the determination of the elements of <math>\mathcal{P}</math> that requires a solution to the halting problem.<br />
<br />
Given <math>p \in \mathcal{P}</math>, we write "<math>\operatorname{prog}(x_{0}) = p</math>" to express the proposition that <math>x_{0}</math> begins with <math>p</math>, and we write "<math>U(p) = y_{0}</math>" to express the proposition that <math>U</math> produces output <math>y_{0}</math>, and then halts, when fed any input beginning with <math>p</math>. Proposition (*) is then equivalent to the exclusive disjunction<br />
:<math>\bigvee_{p \in \mathcal{P} \colon U(p) = y_{0}} (\operatorname{prog}(x_{0}) = p).</math><br />
Since <math>x_{0}</math> was chosen at random from <math>\{0, 1\}^{\omega}</math>, we take the probability of <math>\operatorname{prog}(x_{0}) = p</math> to be <math>2^{-\ell(p)}</math>, where <math>\ell(p)</math> is the length of <math>p</math> as a bit string. Hence, the probability of (*) is<br />
:<math>m(y_{0}) := \sum_{p \in \mathcal{P} \colon U(p) = y_{0}} 2^{-\ell(p)}.</math><br />
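For a toy machine specified as an explicit table of self-delimiting programs, the sum above can be computed directly. The table below is a made-up example (a real universal prefix machine's prefix code cannot be enumerated, as noted above):<br />

```python
from fractions import Fraction

# A toy prefix machine given as an explicit table: each key is a
# self-delimiting program p (no key is a prefix of another), and the
# value is the output U(p).  This table is an illustrative assumption.
U = {
    "0":   "a",
    "10":  "b",
    "110": "a",
    "111": "ab",
}

def solomonoff_prior(y0):
    """m(y0) = sum of 2**-len(p) over programs p with U(p) = y0."""
    return sum((Fraction(1, 2 ** len(p)) for p, out in U.items() if out == y0),
               Fraction(0))

m_a = solomonoff_prior("a")   # programs "0" and "110": 1/2 + 1/8 = 5/8
```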
<br />
<br />
==Blog posts==<br />
<br />
*[http://lesswrong.com/lw/dhg/an_intuitive_explanation_of_solomonoff_induction/ An Intuitive Explanation of Solomonoff Induction] by lukeprog and Alex_Altair<br />
<br />
==See also==<br />
<br />
*[[Kolmogorov complexity]]<br />
*[[AIXI]]<br />
*[[Occam's razor]]<br />
<br />
==References==<br />
<br />
*[http://www.scholarpedia.org/article/Algorithmic_probability Algorithmic probability] on Scholarpedia<br />
<br />
[[Category:Math]]<br />
[[Category:Decision theory]]</div>Etehttps://wiki.lesswrong.com/index.php?title=Shannon_information&diff=15618Shannon information2016-10-13T18:58:02Z<p>Ete: removed dead link, added arbital link</p>
<hr />
<div>{{arbitallink|https://arbital.com/p/shannon/|Shannon}}{{wikilink}}<br />
The Shannon entropy is a measure of the average information content one is missing when one does not know the value of a random variable.<br />
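A minimal sketch of the definition, computing H(X) = -&sum; p log<sub>2</sub> p for a distribution given as a list of probabilities:<br />

```python
import math

def shannon_entropy(probs):
    """H(X) = -sum p * log2(p), in bits: the average information
    content one is missing about the value of the random variable."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

h_coin = shannon_entropy([0.5, 0.5])  # fair coin: 1 bit
h_die = shannon_entropy([1 / 6] * 6)  # fair six-sided die: log2(6) bits
```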
{{stub}}</div>Etehttps://wiki.lesswrong.com/index.php?title=Reflective_inconsistency&diff=15617Reflective inconsistency2016-10-13T18:54:54Z<p>Ete: added links to Arbital</p>
<hr />
<div>{{arbitallink|https://arbital.com/p/reflective_stability/|Reflective stability}}'''Reflective inconsistency''' refers to a disagreement between your earlier self and your later self about what your ''earlier self'' should have done given only your ''earlier'' state of knowledge. It can informally be called ''regret'' or ''remorse''.<br />
<br />
More precisely, if you do Y, and later you think "Knowing only what I knew before doing Y should have deterred me from doing Y," then you are reflectively inconsistent.<br />
<br />
==Examples== <br />
<br />
<br />
:Tuesday Self: <spends 5 dollars on a chance to win 6 dollars if a fair coin flip lands heads><br />
:<coin lands tails><br />
:Wednesday Self: "That wasn't worth the risk, and I should have known better."<br />
<br />
:Tuesday Self: <spends 5 dollars on a chance to win 6 dollars if a fair coin flip lands heads><br />
:<coin lands heads><br />
:Wednesday Self: "That was lucky, but it wasn't worth the risk and I should have known better."<br />
<br />
Non-example:<br />
<br />
:Tuesday Self: <spends 5 dollars on a chance to win 6 dollars if a fair coin flip lands heads><br />
:<coin lands tails><br />
:Wednesday Self: "I would have been better off not taking the bet, since the coin was going to land tails."<br />
<br />
==Related blog posts==<br />
<br />
* [http://lesswrong.com/lw/164/timeless_decision_theory_and_metacircular/ Timeless Decision Theory and Meta-Circular Decision Theory]<br />
* [http://lesswrong.com/lw/nc/newcombs_problem_and_regret_of_rationality/ Newcomb's Problem and Regret of Rationality]<br />
<br />
==See also==<br />
<br />
* [[Dynamic inconsistency]]</div>Etehttps://wiki.lesswrong.com/index.php?title=Probability_theory&diff=15616Probability theory2016-10-13T17:59:27Z<p>Ete: added links to Arbital</p>
<hr />
<div>{{arbitallink|https://arbital.com/explore/probability_theory/|Probability theory}}{{wikilink|Probability theory}}<br />
'''Probability theory''' is a field of mathematics which studies random variables and processes. <br />
<br />
Although most of the basics and axioms of probability theory are uncontroversial, the interpretations, usages, and relative importance given to each result vary. There are two main interpretations of the concept of probability: the Bayesian (subjectivist, epistemic or evidential) and the frequentist (objectivist)<ref>It’s worth mentioning that inside the Bayesian interpretation there are also objectivist views.</ref>. The latter was the major and standard view from the late 19th century until the late 20th century, when the Bayesian interpretation gained popularity in many fields of science and philosophy<ref> WOLPERT, R.L. (2004) “A conversation with James O. Berger”, Statistical science, 9, 205–218.</ref>.<br />
<br />
In the Bayesian interpretation, probability is seen as a degree of belief, or credence, in an event<ref> BERNARDO, J. M. & SMITH, A. F. M. (1994) “Bayesian Theory”. Wiley.</ref> <ref> JAYNES, E. T. (1996) ”Probability theory: The logic of science.'' Available from http://bayes.wustl.edu/etj/prob.html</ref>, whereas frequentist interpretations hold that probability is an objective property of a physical system, a propensity on some accounts<ref> POPPER, Karl.(1959) "The propensity interpretation of probability" The British Journal of the Philosophy of Science, Vol. 10, No. 37. (May, 1959), pp.25-42. Available at: http://www.hum.utah.edu/~mhaber/Documents/Course%20Readings/Popper_Propensity_BJPS1959LITE.pdf </ref>. An [[Wikipedia:Event (probability theory)|event]] with Bayesian probability of .6 (or 60%) should be interpreted as stating "With confidence 60%, this event contains the true outcome", whereas a frequentist interpretation would view it as stating "Over 100 trials, we should observe event X approximately 60 times." Frequentists tend to base their view of probability on the [http://en.wikipedia.org/wiki/Law_of_large_numbers Law of Large Numbers], holding the expected probability of an individual event to be close to the average of the results obtained from a large number of trials. On this view, only repeatable events can be said to have probabilities.<br />
<br />
The Bayesian interpretation, on the other hand, allows one to assign probabilities to both beliefs and events that have never before happened. [[Bayes' theorem]] is used to update the probability of those beliefs when presented with new evidence. The Bayesian interpretation aims to model the correct way of thinking rationally when one is forced to deal with ignorance and uncertainty<ref> KORB, Kevin & NICHOLSON, Ann. (2010) "Bayesian Artificial Intelligence". CRC Press. p. 28</ref>. Many other fields, such as the study of [[Bias|cognitive biases]], use it as the golden reference for rationality.<br />
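As a minimal numeric illustration of such an update (the numbers here are made up), Bayes' theorem in its expanded form gives:<br />

```python
def bayes_update(prior, likelihood, likelihood_not):
    """P(A|B) = P(B|A)P(A) / [P(B|A)P(A) + P(B|~A)P(~A)]."""
    evidence = likelihood * prior + likelihood_not * (1 - prior)
    return likelihood * prior / evidence

# A 1% prior credence in a belief, and evidence that is 90% likely if
# the belief is true but only 5% likely if it is false:
posterior = bayes_update(0.01, 0.90, 0.05)  # roughly 0.15
```

Even strong evidence leaves the posterior modest when the prior is low, which is why Bayesians stress the prior as much as the evidence.<br />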
<br />
Many philosophers have argued for a complementary view of probability<ref>CARNAP, Rudolf. (1945). "The Two Concepts of Probability", Philosophy and Phenomenological Research 5, 513-532. </ref> <ref>JEFFREY, Richard C. (1965). "The Logic of Decision." New York: McGraw-Hill. </ref> <ref name="lewis"> LEWIS, David. (1980). “A Subjectivist's Guide to Objective Chance”. In JEFFREY, Richard C. (ed.), Studies in Inductive Logic and Probability. University of California Press. Available at: http://fitelson.org/probability/Lewis_asgtoc.pdf</ref> <ref name="cox">COX, R. T. (1946) “Probability, frequency and reasonable expectation,'' American Journal of Physics, vol. 14, no. 1, pp. 1-13</ref>, where both interpretations have their places and value. In the paper "A Subjectivist's Guide to Objective Chance", [[Wikipedia:David Kellogg Lewis|David Lewis]] constructed a view in which the frequentist interpretation (objective, chance) is incorporated inside the Bayesian (subjective, credence) as a special case. The frequentist probability is a Bayesian probability conditionalized on truth or empirical evidence: "Chance is objectified subjective probability(...). Objectified credence is credence conditional on the truth." <ref name="lewis" /><br />
<br />
==Blog posts==<br />
*[http://lesswrong.com/lw/1to/what_is_bayesianism/ What is Bayesianism?]<br />
*[http://lesswrong.com/lw/s6/probability_is_subjectively_objective/ Probability is Subjectively Objective]<br />
*[http://lesswrong.com/lw/oj/probability_is_in_the_mind/ Probability is in the Mind]<br />
*[http://lesswrong.com/lw/sg/when_not_to_use_probabilities/ When (Not) To Use Probabilities]<br />
*[http://lesswrong.com/lw/7lk/an_objective_defense_of_bayesianism/ An objective defense of Bayesianism]<br />
*[http://lesswrong.com/lw/1gc/frequentist_statistics_are_frequently_subjective/ Frequentist Statistics are Frequently Subjective]<br />
<br />
==Notes and References==<br />
{{Reflist|2}}<br />
<br />
==See also==<br />
<br />
*[[Priors]]<br />
*[[Bayes' theorem]]<br />
*[[Mind projection fallacy]]<br />
<br />
[[Category:Concepts]]</div>Etehttps://wiki.lesswrong.com/index.php?title=Ontological_crisis&diff=15615Ontological crisis2016-10-13T17:52:53Z<p>Ete: </p>
<hr />
<div>{{arbitallink|https://arbital.com/p/ontology_identification/|Ontology identification problem}}'''Ontological crisis''' is a term coined to describe the crisis an agent, human or not, goes through when its model - its ontology - of reality changes. <br />
<br />
In the human context, a clear example of an ontological crisis is a believer’s loss of faith in God. Their motivations and goals, which came from a very specific view of life, suddenly become obsolete and perhaps even nonsensical in the face of this new configuration. The person will then experience a deep crisis and go through the psychological task of reconstructing their set of preferences according to the new worldview. <br />
<br />
When dealing with artificial agents, we, as their creators, are directly interested in their goals. That is, as Peter de Blanc puts it, when we create something we want it to be useful. As such we will have to define the artificial agent’s ontology – but since a fixed ontology severely limits its usefulness, we have to think about adaptability. In his 2011 paper, de Blanc proposes a method for mapping old ontologies into new ones, thus adapting the agent’s utility functions and avoiding a crisis.<br />
<br />
This crisis, in the context of an [[AGI]], could in the worst case pose an [[existential risk]] when old preferences and goals continue to be used. Another possibility is that the AGI loses all ability to comprehend the world, and would pose no threat at all. If an AGI reevaluates its preferences after its ontological crisis, for example in the way mentioned above, very [[Unfriendly Artificial Intelligence| unfriendly]] behaviors could arise. Depending on the extent of the reevaluations, the AGI's changes may be detected and safely fixed. On the other hand, they could go undetected until they cause harm - which shows why it is in our interest to explore ontological adaptation methods deeply when designing AI. <br />
<br />
==Further Reading & References==<br />
*[http://arxiv.org/abs/1105.3821 Ontological Crises in Artificial Agents' Value Systems] by Peter de Blanc<br />
<br />
== Blog posts ==<br />
*[http://lesswrong.com/r/discussion/lw/827/ai_ontology_crises_an_informal_typology/ AI ontology crises: an informal typology] by Stuart Armstrong<br />
*[http://lesswrong.com/lw/xl/eutopia_is_scary/ Eutopia is Scary] by Eliezer Yudkowsky<br />
*[http://lesswrong.com/lw/fyb/ontological_crisis_in_humans/ Ontological Crisis in Humans] by Wei Dai<br />
<br />
==See also==<br />
*[[Evolution]]<br />
*[[Adaptation executers]]</div>Etehttps://wiki.lesswrong.com/index.php?title=Ontological_crisis&diff=15614Ontological crisis2016-10-13T17:52:09Z<p>Ete: added links to Arbital</p>
<hr />
<div>{{arbitallink|https://arbital.com/p/ontological_identification/|Ontological identification problem}}'''Ontological crisis''' is a term coined to describe the crisis an agent, human or not, goes through when its model - its ontology - of reality changes. <br />
<br />
In the human context, a clear example of an ontological crisis is a believer’s loss of faith in God. Their motivations and goals, which came from a very specific view of life, suddenly become obsolete and perhaps even nonsensical in the face of this new configuration. The person will then experience a deep crisis and go through the psychological task of reconstructing their set of preferences according to the new worldview. <br />
<br />
When dealing with artificial agents, we, as their creators, are directly interested in their goals. That is, as Peter de Blanc puts it, when we create something we want it to be useful. As such we will have to define the artificial agent’s ontology – but since a fixed ontology severely limits its usefulness, we have to think about adaptability. In his 2011 paper, de Blanc proposes a method for mapping old ontologies into new ones, thus adapting the agent’s utility functions and avoiding a crisis.<br />
<br />
This crisis, in the context of an [[AGI]], could in the worst case pose an [[existential risk]] when old preferences and goals continue to be used. Another possibility is that the AGI loses all ability to comprehend the world, and would pose no threat at all. If an AGI reevaluates its preferences after its ontological crisis, for example in the way mentioned above, very [[Unfriendly Artificial Intelligence| unfriendly]] behaviors could arise. Depending on the extent of the reevaluations, the AGI's changes may be detected and safely fixed. On the other hand, they could go undetected until they cause harm - which shows why it is in our interest to explore ontological adaptation methods deeply when designing AI. <br />
<br />
==Further Reading & References==<br />
*[http://arxiv.org/abs/1105.3821 Ontological Crises in Artificial Agents' Value Systems] by Peter de Blanc<br />
<br />
== Blog posts ==<br />
*[http://lesswrong.com/r/discussion/lw/827/ai_ontology_crises_an_informal_typology/ AI ontology crises: an informal typology] by Stuart Armstrong<br />
*[http://lesswrong.com/lw/xl/eutopia_is_scary/ Eutopia is Scary] by Eliezer Yudkowsky<br />
*[http://lesswrong.com/lw/fyb/ontological_crisis_in_humans/ Ontological Crisis in Humans] by Wei Dai<br />
<br />
==See also==<br />
*[[Evolution]]<br />
*[[Adaptation executers]]</div>Etehttps://wiki.lesswrong.com/index.php?title=Orthogonality_thesis&diff=15613Orthogonality thesis2016-10-13T17:49:55Z<p>Ete: added links to Arbital</p>
<hr />
<div>{{arbitallink|https://arbital.com/p/orthogonality/|Orthogonality thesis}}The '''orthogonality thesis''' states that an artificial intelligence can have any combination of intelligence level and goal. This is in contrast to the belief that, because of their intelligence, AIs will all converge to a common goal. The thesis was originally defined by [[Nick Bostrom]] in the paper "The Superintelligent Will" (along with the [[instrumental convergence thesis]]). For his purposes, Bostrom defines intelligence as [[instrumental rationality]].<br />
<br />
==Defense of the thesis==<br />
It has been pointed out that the orthogonality thesis is the default position, and that the burden of proof is on claims that limit possible AIs. Stuart Armstrong writes that,<br />
<br />
{{Quote|Thus to deny the Orthogonality thesis is to assert that there is a goal system G, such that, among other things:<br />
<br />
#There cannot exist any efficient real-world algorithm with goal G.<br />
#If a being with arbitrarily high resources, intelligence, time and goal G, were to try design an efficient real-world algorithm with the same goal, it must fail.<br />
#If a human society were highly motivated to design an efficient real-world algorithm with goal G, and were given a million years to do so along with huge amounts of resources, training and knowledge about AI, it must fail.<br />
#If a high-resource human society were highly motivated to achieve the goals of G, then it could not do so (here the human society is seen as the algorithm).<br />
#Same as above, for any hypothetical alien societies.<br />
#There cannot exist ''any'' pattern of reinforcement learning that would train a highly efficient real-world intelligence to follow the goal G.<br />
#There cannot exist any evolutionary or environmental pressures that would evolve highly efficient real world intelligences to follow goal G.}}<br />
<br />
One reason many researchers assume superintelligences would converge to the same goals may be that [[Human universal|most humans]] have similar values. Furthermore, many philosophies hold that there is a rationally correct morality, which implies that a sufficiently rational AI will acquire this morality and begin to act according to it. Armstrong points out that for formalizations of AI such as [[AIXI]] and [[Gödel machine|Gödel machines]], the thesis is known to be true. Furthermore, if the thesis were false, then [[Oracle AI|Oracle AIs]] would be impossible to build, and all sufficiently intelligent AIs would be impossible to control.<br />
<br />
==Pathological Cases==<br />
There are some pairings of intelligence and goals which cannot exist. For instance, an AI may have the goal of using as few resources as possible, or simply of being as unintelligent as possible. Such goals inherently limit the degree of intelligence of the AI.<br />
<br />
==Blog posts==<br />
*[http://lesswrong.com/lw/cej/general_purpose_intelligence_arguing_the/ General purpose intelligence: arguing the Orthogonality thesis]<br />
<br />
==See also==<br />
*[[Basic AI drives]]<br />
<br />
==External links==<br />
*Definition of the orthogonality thesis from Bostrom's [http://www.nickbostrom.com/superintelligentwill.pdf Superintelligent Will]<br />
*[http://philosophicaldisquisitions.blogspot.com/2012/04/bostrom-on-superintelligence-and.html Critique] of the thesis by John Danaher</div>Etehttps://wiki.lesswrong.com/index.php?title=Nonperson_predicate&diff=15612Nonperson predicate2016-10-13T17:24:58Z<p>Ete: added links to Arbital</p>
<hr />
<div>{{arbitallink|https://arbital.com/p/nonperson_predicate/|Nonperson predicate}}A '''Nonperson Predicate''' is a theorized test which can definitely distinguish computational structures which are ''not'' people; i.e., a predicate which returns 1 for all people, and returns 0 or 1 for nonpeople; thus if it returns 1, the structure may or may not be a person, but if it returns 0, the structure is definitely not a person. In other words, any time at least one trusted nonperson predicate returns 0, we know we can run that program without creating a person. (The impossibility of perfectly distinguishing people and nonpeople is a trivial consequence of the halting problem.)<br />
<br />
The need for such a test arises from the possibility that when an [[Artificial General Intelligence]] predicts a person's actions, it may develop a model of them so complete that the model itself qualifies as a person (though not necessarily the ''same'' person). As the AGI investigates possibilities, these simulated people might be subjected to a large number of unpleasant situations. With a trusted nonperson predicate, either the AGI's designers or the AGI itself could ensure that no actual people are created.<br />
<br />
Any practical implementation would likely consist of a large number of nonperson predicates of increasing complexity. For most nonpersons, a predicate will quickly return that it is not a person and conclude the test. Although any number of the predicates may be used before the test claims that something is not a person, it is crucial that any predicate in the test never claims that a person isn't a person. Unclassifiable cases being in-principle unavoidable, it is preferable that the AGI errs on the side of considering possible-persons as persons.<br />
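The composition logic described above can be sketched as follows. The individual predicates and the structure representation here are purely hypothetical placeholders (no actual nonperson predicate is known); the point is the conservative contract: 0 means "definitely not a person", 1 means "might be a person", and the battery may only ever err toward 1.<br />

```python
def is_tiny(structure):
    # Placeholder check: a structure too small to encode a mind is
    # definitely not a person.  The threshold is an arbitrary stand-in.
    return 0 if structure["size"] < 1000 else 1

def is_stateless(structure):
    # Placeholder check: a structure with no internal state.
    return 0 if structure["memory"] is None else 1

def nonperson(structure, predicates):
    """Return 0 iff at least one trusted predicate certifies that the
    structure is definitely not a person; otherwise conservatively
    return 1 ("might be a person")."""
    return 0 if any(p(structure) == 0 for p in predicates) else 1

checks = [is_tiny, is_stateless]                   # cheapest checks first
thermostat = {"size": 10, "memory": None}          # certified nonperson
unknown = {"size": 10**12, "memory": "state..."}   # unclassifiable: treat as possible person
```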
<br />
== See Also ==<br />
* [[Computational hazard]]<br />
* [[Philosophical zombie]]<br />
<br />
== Blog Posts ==<br />
* [http://lesswrong.com/lw/x4/nonperson_predicates/ Nonperson Predicates] by Eliezer Yudkowsky<br />
* [http://lesswrong.com/lw/d2f/computation_hazards/ Computational Hazards] by Alex Altair</div>Etehttps://wiki.lesswrong.com/index.php?title=Newcomb%27s_problem&diff=15611Newcomb's problem2016-10-13T17:20:57Z<p>Ete: added links to Arbital</p>
<hr />
<div>{{arbitallink|https://arbital.com/explore/5pt/|Newcombelike decision problems}}{{wikilink|Newcomb's paradox}}<br />
In '''Newcomb's problem''', a superintelligence called [[Omega]] shows you two boxes, A and B, and offers you the choice of taking only box A, or both boxes A and B. Omega has put $1,000 in box B. If Omega thinks you will take box A only, he has put $1,000,000 in it. Otherwise he has left it empty. Omega has played this game many times, and has never been wrong in his predictions about whether someone will take both boxes or not.<br />
<br />
Terms used in relation to this paradox:<br />
<br />
* '''[[Omega]]''', the superintelligence who decides whether to put the million in box A.<br />
* '''one-box''': to take only box A<br />
* '''two-box''': to take both boxes<br />
<br />
<br />
A succinct introduction to analysis of the paradox, paraphrased from Gary Drescher's ''Good and Real'':<br />
<br />
{{Quote|<br />
What makes this a "paradox" is that it brings into sharp conflict two distinct intuitions we have about decision-making, which rarely bear on the same situation but clash in the case of Newcomb's. The first intuition is ''considering rational expectations, act so as to bring about desired outcomes''. This suggests one-boxing: we expect, based on the evidence, that we will find box A empty if we two-box. The second intuition is ''only act if your action will alter the outcome''. This suggests two-boxing: our decision to take one box or both cannot alter the outcome, which robs the first intuition of its power to suggest one-boxing; since two-boxing has strictly greater expected utility, we choose that.}}<br />
<br />
==Irrelevance of Omega's physical impossibility==<br />
Sometimes people dismiss Newcomb's problem because a being such as Omega is physically impossible. Actually, the possibility or impossibility of Omega is irrelevant. Consider a skilled human psychologist that can predict other humans' actions with, say, 65% accuracy. Now imagine they start running Newcomb trials with themselves as Omega.<br />
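A quick calculation shows why the imperfect predictor preserves the problem: treating the prediction as evidence about your choice (the first intuition above), even a 65%-accurate predictor leaves one-boxing with the higher expected payoff.<br />

```python
def expected_payoffs(accuracy, big=1_000_000, small=1_000):
    """Expected winnings given a predictor that is right with the
    stated probability, treating the prediction as evidence about
    your choice.  Box B always holds $1,000; box A holds $1,000,000
    iff the predictor expected you to one-box."""
    one_box = accuracy * big
    two_box = (1 - accuracy) * big + small
    return one_box, two_box

one_box, two_box = expected_payoffs(0.65)   # roughly 650,000 vs 351,000
```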
<br />
==Blog posts==<br />
<br />
*[http://lesswrong.com/lw/nc/newcombs_problem_and_regret_of_rationality/ Newcomb's Problem and Regret of Rationality]<br />
*[http://lesswrong.com/lw/7v/formalizing_newcombs/ Formalizing Newcomb's] by [http://lesswrong.com/user/cousin_it/ cousin_it]<br />
*[http://lesswrong.com/lw/90/newcombs_problem_standard_positions/ Newcomb's Problem standard positions]<br />
*[http://lesswrong.com/lw/6r/newcombs_problem_vs_oneshot_prisoners_dilemma/ Newcomb's Problem vs. One-Shot Prisoner's Dilemma] by [http://weidai.com/ Wei Dai]<br />
*[http://lesswrong.com/lw/17b/decision_theory_why_pearl_helps_reduce_could_and/ Decision theory: Why Pearl helps reduce “could” and “would”, but still leaves us with at least three alternatives] by [[Anna Salamon]]<br />
*{{Lesswrongtag|newcomb}}<br />
<br />
==See also==<br />
<br />
*[[Decision theory]]<br />
*[[Counterfactual mugging]]<br />
*[[Parfit's hitchhiker]]<br />
*[[Smoker's lesion]]<br />
*[[Absentminded driver]]<br />
*[[Sleeping Beauty problem]]<br />
*[[Prisoner's dilemma]]<br />
*[[Pascal's mugging]]<br />
<br />
<br />
[[Category:Problems]]<br />
[[Category:Decision theory]]</div>Etehttps://wiki.lesswrong.com/index.php?title=Odds&diff=15610Odds2016-10-13T16:56:43Z<p>Ete: added links to Arbital</p>
<hr />
<div>{{arbitallink|https://arbital.com/p/62c/|Bayes' Rule: Odds form}}Odds ratios are an alternate way of expressing probabilities, which simplifies the process of updating them with new evidence. The odds ratio of A is P(A)/P(¬A).<br />
<br />
<math>P(A|B) = P(B|A)\frac{P(A)}{P(B)}</math><br />
<br />
<math>P(\neg A|B) = P(B|\neg A)\frac{P(\neg A)}{P(B)}</math><br />
<br />
<math>\frac{P(A|B)}{P(\neg A|B)} = \frac{P(B|A)}{P(B|\neg A)}\frac{P(A)}{P(\neg A)}</math><br />
<br />
Thus, in order to find the posterior odds ratio <math>\frac{P(A|B)}{P(\neg A|B)}</math>, one simply multiplies the prior odds ratio <math>\frac{P(A)}{P(\neg A)}</math> by the likelihood ratio <math>\frac{P(B|A)}{P(B|\neg A)}</math>.<br />
<br />
Odds ratios are commonly written as the ratio of two numbers separated by a colon. For example, if P(A) = 2/3, the odds ratio would be 2, but this would most likely be written as 2:1.<br />
<br />
The relation between an odds ratio, a:b, and a probability, p, is as follows:<br />
<br />
<math>a:b = p:(1-p)</math><br />
<br />
<math>p = \frac{a}{a+b}</math><br />
<br />
Suppose you have a box that has a 5% chance of containing a diamond. You also have a diamond detector that beeps half of the time if there is a diamond, and one fourth of the time if there is not. You wave the diamond detector over the box and it beeps.<br />
<br />
The prior odds of the box containing a diamond are 1:19. The likelihood ratio of a beep is (1/2):(1/4) = 2:1. The posterior odds are 1:19 * 2:1 = 2:19. This corresponds to a probability of 2/21, which is about 0.095, or 9.5%.<br />
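The diamond-detector calculation above can be reproduced exactly by representing odds as a single fraction a/b:<br />

```python
from fractions import Fraction

def update_odds(prior_odds, likelihood_ratio):
    """Posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

def odds_to_probability(odds):
    """For odds a:b expressed as the single number a/b, p = a/(a+b)."""
    return odds / (1 + odds)

prior = Fraction(1, 19)               # a 5% chance of a diamond
lr = Fraction(1, 2) / Fraction(1, 4)  # beep is twice as likely with a diamond
posterior = update_odds(prior, lr)    # 2:19
p = odds_to_probability(posterior)    # 2/21, about 9.5%
```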
<br />
==See also==<br />
<br />
*[[Log odds]]<br />
*[[Likelihood ratio]]<br />
<br />
[[Category:Concepts]]</div>Etehttps://wiki.lesswrong.com/index.php?title=Log_odds&diff=15609Log odds2016-10-13T16:49:50Z<p>Ete: added links to Arbital</p>
<hr />
<div>{{arbitallink|https://arbital.com/p/bayes_log_odds/|Bayes' Rule: Log-odds form}}Log odds are an alternate way of expressing probabilities, which simplifies the process of updating them with new evidence. Their main drawback is that converting between probability and log odds requires computing a logarithm or exponential. The log odds are the log of the [[odds ratio]]. Thus, the log odds of A are <br />
<br />
<math>logit\left(P(A)\right) = \log\left(\frac{P(A)}{P(\neg A)}\right)</math>.<br />
<br />
==Updating Log odds==<br />
<br />
<math>P(A|B) = P(B|A)\frac{P(A)}{P(B)}</math><br />
<br />
<math>P(\neg A|B) = P(B|\neg A)\frac{P(\neg A)}{P(B)}</math><br />
<br />
<math>\frac{P(A|B)}{P(\neg A|B)} = \frac{P(B|A)}{P(B|\neg A)}\frac{P(A)}{P(\neg A)}</math><br />
<br />
<math>\log\left(\frac{P(A|B)}{P(\neg A|B)}\right) = \log\left(\frac{P(B|A)}{P(B|\neg A)}\right) + \log\left(\frac{P(A)}{P(\neg A)}\right)</math><br />
<br />
<math>logit\left(P(A|B)\right) = \log\left(\frac{P(B|A)}{P(B|\neg A)}\right) + logit\left(P(A)\right)</math><br />
<br />
Thus, in order to find the posterior log odds <math>logit\left(P(A|B)\right)</math>, one simply adds the log of the likelihood ratio <math>\log\left(\frac{P(B|A)}{P(B|\neg A)}\right)</math> to the prior log odds <math>logit\left(P(A)\right)</math>.<br />
<br />
The base of the logarithm determines what units the log odds are measured in, and doesn't matter so long as it's consistent. The natural log is usually used.<br />
<br />
==Examples==<br />
<br />
Suppose you have a box that has a 5% chance of containing a diamond. You also have a diamond detector that beeps half of the time if there is a diamond, and one fourth of the time if there is not. You wave the diamond detector over the box and it beeps.<br />
<br />
The prior log odds of the box containing a diamond are ln(1/19) ≈ -2.94. The log of the likelihood ratio of a beep is ln((1/2)/(1/4)) = ln(2) ≈ 0.69. The posterior log odds are -2.94 + 0.69 = -2.25. This corresponds to an odds ratio of e^-2.25 ≈ 0.105:1, and thus a probability of 0.105/1.105 ≈ 0.095, or about 9.5% (the small discrepancy from the exact 2/21 is due to rounding).<br />
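<br />
Carried out without intermediate rounding, the same example recovers the exact answer (an illustrative Python sketch, not part of the original article; the function names are ours):<br />

```python
import math

def logit(p):
    """Log odds of a probability p: ln(p / (1 - p))."""
    return math.log(p / (1 - p))

def inv_logit(x):
    """Recover the probability from log odds x."""
    return 1 / (1 + math.exp(-x))

prior = logit(0.05)                     # ln(1/19), about -2.944
evidence = math.log((1 / 2) / (1 / 4))  # ln(2), about 0.693
print(inv_logit(prior + evidence))      # about 0.0952, the exact 2/21
```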
<br />
==See also==<br />
<br />
*[[Odds ratio]]<br />
*[[Likelihood ratio]]<br />
<br />
[[Category:Concepts]]</div>Etehttps://wiki.lesswrong.com/index.php?title=List_of_communities&diff=15608List of communities2016-10-13T16:45:30Z<p>Ete: </p>
<hr />
<div>Communities which may be useful to aspiring rationalists. Changes may be suggested in [http://lesswrong.com/r/discussion/lw/jr2/what_are_some_related_communities_online/aoh1 this comments thread] or edited directly.<br />
<br />
==General==<br />
<br />
[http://www.reddit.com/r/changemyview Change my view] - place to get your view challenged<br />
<br />
[http://www.reddit.com/r/philosophy Philosophy Reddit]<br />
<br />
==Rationality==<br />
<br />
[http://forum.beeminder.com/c/akrasia The Akrasia category of the Beeminder forum]<br />
<br />
==Skeptics==<br />
<br />
[http://skeptics.stackexchange.com/ Skeptics Stack Exchange] - useful for confirming factual claims you are skeptical of. Requires specific, answerable questions<br />
<br />
[http://boards.straightdope.com/sdmb/ Straight dope] - similar to Skeptics, but less strict on the questions accepted<br />
<br />
==QA Sites==<br />
<br />
Stacks: [http://stackoverflow.com StackOverflow], [http://cs.stackexchange.com/ Computer Science StackExchange], [http://cstheory.stackexchange.com/ Theoretical computer science], [http://mathoverflow.com/ MathOverflow], [http://math.stackexchange.com/ Math StackExchange], [http://stats.stackexchange.com/ Cross Validated], [http://physics.stackexchange.com/ Physics Stackexchange]<br />
<br />
[http://www.reddit.com/r/askscience Ask Science]<br />
<br />
[http://cogsci.stackexchange.com/ Cognitive Science] - you are expected to do your reading first, but useful if you want to learn what the research says<br />
<br />
==Effective Altruism==<br />
<br />
[http://www.reddit.com/r/smartgiving Reddit]<br />
<br />
[https://www.facebook.com/groups/effective.altruists/ Facebook]<br />
<br />
[http://www.effectivealtruismsummit.com/ Summit]<br />
<br />
[http://www.effective-altruism.com/ Forum]<br />
<br />
==See Also==<br />
[[List of Blogs]]</div>Etehttps://wiki.lesswrong.com/index.php?title=Likelihood_ratio&diff=15607Likelihood ratio2016-10-13T16:43:37Z<p>Ete: </p>
<hr />
<div>{{arbitallink|https://arbital.com/p/likelyhood_ratio/|Likelihood ratio}}<br />
{{stub}}<br />
==Blog posts==<br />
<br />
*[http://www.overcomingbias.com/2009/02/share-likelihood-ratios-not-posterior-beliefs.html Share likelihood ratios, not posterior beliefs] by [[Anna Salamon]] and [[Steve Rayhawk]]<br />
<br />
==See also==<br />
<br />
*[[Bayes' theorem]]<br />
*[[Evidence]]<br />
<br />
[[Category:Concepts]]</div>Etehttps://wiki.lesswrong.com/index.php?title=Induction&diff=15605Induction2016-10-13T16:31:34Z<p>Ete: added links to Arbital</p>
<hr />
<div>'''Induction''' usually refers to a form of reasoning that has specific examples as premises and general propositions as conclusions. For example, an argument such as "Swans 1, 2, 3, …, ''n'' are white, hence all swans are white" takes the specific observations of a finite number (''n'') of swans being white to the general conclusion that all swans are white.<br />
<br />
Modern views of induction state that any form of reasoning where the conclusion isn't necessarily entailed in the premises is a form of inductive reasoning. Therefore, even inferences which proceed from general premises to specific conclusions can be inductive, for example "The sun has always risen, so it will also rise tomorrow". In contrast, in deductive reasoning the conclusions are logically entailed by the premises. Contrary to deduction, induction can be wrong since the conclusions depend on the way the world actually is, not merely on the logical structure of the argument.<br />
<br />
==The Problem of Induction==<br />
There has historically been a problem with the justification of the validity of induction. Hume argued that the justification for induction could either be a deduction or an induction. Since deductive reasoning only results in necessary conclusions and inductions can fail, the justification for inductive reasoning could not be deductive. But any inductive justification would be circular [http://plato.stanford.edu/entries/induction-problem/#CanIndJus].<br />
<br />
==Probabilistic Induction==<br />
{{Main|Solomonoff induction}}<br />
It’s possible to engage in probabilistic inductive reasoning, such as "95% of humans who ever lived have died; hence I’m going to die". This kind of reasoning employs [[Bayesian probability]], in which case the conclusion is also a probability and induction is taken to be a way of updating your beliefs given evidence (finding out that most humans who have ever lived have died increases your probability that you will die).<br />
<br />
[[Solomonoff induction]] is a formalization of the problem of induction which has been claimed to solve the problem of induction. It starts with all possible hypotheses (sequences) as represented by computer programs (that generate those sequences), weighted by their simplicity. It then proceeds to discard any hypotheses which are inconsistent with the data, and to update the probabilities of the remaining hypotheses.<br />
<br />
==Mathematical Induction==<br />
{{Arbitallink|https://arbital.com/p/mathematical_induction/|Induction}}<br />
[[Wikipedia:Mathematical induction|Mathematical induction]] is a method of mathematical proof in which one proves a statement holds for all ''n'' by showing that it holds for the lowest ''n'' and that it is preserved by any operation which increases the value of ''n''. For sets with finitely many members - or infinitely many members that can be indexed by the natural numbers - it suffices to show the statement is preserved by the successor operation (if it is true for ''n'', then it is true for ''n+1''). Because the conclusion is necessary given the premises, mathematical induction is taken to be a form of deductive reasoning, and it isn't affected by the problem of induction.<br />
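<br />
As a concrete illustration (a standard textbook example, not part of the original article): to prove <math>1 + 2 + \dots + n = \frac{n(n+1)}{2}</math> for all natural ''n'', the base case checks <math>1 = \frac{1 \cdot 2}{2}</math>, and the inductive step assumes <math>\sum_{k=1}^{n} k = \frac{n(n+1)}{2}</math> and concludes <math>\sum_{k=1}^{n+1} k = \frac{n(n+1)}{2} + (n+1) = \frac{(n+1)(n+2)}{2}</math>, so the statement is preserved by the successor operation.<br />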
<br />
==See Also==<br />
*[http://plato.stanford.edu/entries/induction-problem/ Stanford Encyclopedia entry on the Problem of Induction]<br />
*[[Solomonoff induction]]<br />
*[[Rationality]]</div>Etehttps://wiki.lesswrong.com/index.php?title=Expected_value&diff=15604Expected value2016-10-13T16:23:46Z<p>Ete: added links to Arbital</p>
<hr />
<div>{{wikilink}}<br />
{{Arbitallink|https://arbital.com/p/expected_value/|Expected value}}<br />
The '''expected value''' or '''expectation''' is the average of all the possible outcomes of an event, weighted by their [[probability]]. For example, when you roll a die, the expected value is (1+2+3+4+5+6)/6 = 3.5. <br />
<br />
(Since a die doesn't even have a face that says 3.5, this illustrates that very often, the "expected value" isn't a value you actually expect.)<br />
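<br />
The die calculation looks like this in Python (an illustrative snippet, not part of the original article; the function name is ours):<br />

```python
def expected_value(outcomes):
    """Probability-weighted average over (value, probability) pairs."""
    return sum(value * prob for value, prob in outcomes)

# A fair six-sided die: faces 1..6, each with probability 1/6.
die = [(face, 1 / 6) for face in range(1, 7)]
print(expected_value(die))  # ~3.5, a value no single roll can produce
```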
<br />
==See also==<br />
<br />
*[[Probability]]<br />
*[[Expected utility]]<br />
<br />
==External links==<br />
<br />
*[http://yudkowsky.net/rational/technical A Technical Explanation of Technical Explanation]<br />
<br />
{{stub}}<br />
[[Category:Concepts]]</div>Etehttps://wiki.lesswrong.com/index.php?title=Cryonics&diff=15532Cryonics2016-08-27T20:28:57Z<p>Ete: /* External links */ added waitbutwhy link</p>
<hr />
<div>{{wikilink|Cryonics}}<br />
'''Cryonics''' is the practice of preserving people in liquid nitrogen soon after their heart stops. The idea is that most of your brain's information content is still intact right after you've "died". If humans invent molecular nanotechnology or brain emulation techniques, it may be possible to reconstruct the consciousness of cryopreserved patients.<br />
<br />
==Cryonics-associated issues commonly raised on Less Wrong==<br />
<br />
Pro-cryonics points:<br />
<br />
* Advanced reductionism/physicalism (because of the issues associated with [[personal identity|identifying a person]] with continuity of brain information).<br />
* Whether an extended healthy lifespan is worthwhile (relates to [[Fun Theory]], religious rationalizations for 70-year lifespans, "sour grapes" rationalizations for why death is actually a good thing).<br />
* The "[[shut up and multiply]]" aspect of spending $300/year (as Eliezer Yudkowsky quotes his costs for Cryonics Institute membership ($125/year) plus term life insurance ($180/year)) for a probability (how large being widely disputed) of obtaining many more years of lifespan. For this reason, cryonics advocates regard it as an ''extreme case'' of failure at rationality - a low-hanging fruit by which millions of deaths per year could be prevented at low cost.<br />
<br />
Anti-cryonics points:<br />
<br />
* Cognitive biases contributing to emotional prejudice in favor of cryonics (optimistic bias, motivated cognition).<br />
* The [[Conjunction fallacy|multiply chained nature]] of the probabilities involved in cryonics, and whether the final expected utility is worth the cost.<br />
* Money spent on cryonics could, arguably, be better spent on [http://lesswrong.com/lw/3gj/efficient_charity_do_unto_others/ efficient charity].<br />
<br />
==Blog posts==<br />
*[http://www.overcomingbias.com/2008/12/we-agree-get-froze.html We Agree: Get Froze] by [[Robin Hanson]]. "My co-blogger Eliezer and I may disagree on AI fooms, but we agree on something quite contrarian and, we think, huge: More likely than not, most folks who die today didn't have to die! ... It seems far more people read this blog daily than have ever signed up for cryonics. While it is hard to justify most medical procedures using standard health economics calculations, such calculations say that at today's prices cryonics seems a good deal even if you think there's only a 5% chance it'll work."<br />
*[http://lesswrong.com/lw/wq/you_only_live_twice/ You Only Live Twice] by [[Eliezer Yudkowsky]]. "My co-blogger Robin and I may disagree on how fast an AI can improve itself, but we agree on an issue that seems much simpler to us than that: At the point where the current legal and medical system gives up on a patient, they aren't really dead."<br />
* [http://lesswrong.com/lw/z0/the_pascals_wager_fallacy_fallacy/ The Pascal's Wager Fallacy Fallacy] - the fallacy of Pascal's Wager combines a high payoff with a [[privileging the hypothesis|privileged hypothesis]], one with low prior probability and no particular reason to believe it. Perceptually seeing an instance of "Pascal's Wager" ''just'' from the high payoff, even when the probability is not small, is the Pascal's Wager Fallacy Fallacy.<br />
* [http://lesswrong.com/lw/1mc/normal_cryonics/ Normal Cryonics] - On the shift of perspective that came from attending a gathering of normal-seeming young cryonicists.<br />
* [http://lesswrong.com/lw/1mh/that_magical_click/ That Magical Click] - What is the unexplained process whereby some people get cryonics, or other frequently-derailed chains of thought, in a very short time?<br />
*[http://lesswrong.com/lw/r9/quantum_mechanics_and_personal_identity/ Quantum Mechanics and Personal Identity] by [[Eliezer Yudkowsky]]. A shortened index into the [http://www.overcomingbias.com/2008/06/the-quantum-phy.html Quantum Physics Sequence] describing only the prerequisite knowledge to understand the statement that "science can rule out a notion of personal identity that depends on your being composed of the same atoms - because modern physics has taken the concept of 'same atom' and thrown it out the window. There ''are'' no little billiard balls with individual identities. It's experimentally ruled out." The key post in this sequence is [http://www.overcomingbias.com/2008/06/timeless-identi.html Timeless Identity], in which "Having used physics to completely trash all naive theories of identity, we reassemble a conception of persons and experiences from what is left" but this finale might make little sense without the prior discussion.<br />
*[http://www.overcomingbias.com/2009/03/break-cryonics-down.html Break Cryonics Down] by [[Robin Hanson]] - tries to identify some of the chained probabilities involved in cryonics.<br />
*[http://lesswrong.com/lw/hv/third_alternatives_for_afterlifeism/ Third Alternatives for Afterlife-ism] by [[Eliezer Yudkowsky]] - explains why cryonics is a [[third option]] in the dilemma about whether we should tell [[noble lie]]s about an afterlife, to prevent people from getting depressed by not believing in an afterlife.<br />
*[http://lesswrong.com/lw/1r0/a_survey_of_anticryonics_writing/ A survey of anti-cryonics writing] by [[ciphergoth]] - an attempt to find quality criticism of cryonics, with a surprising result that "there is not one person who has ever taken the time to read and understand cryonics claims in any detail, still considers it pseudoscience, and has written a paper, article or even a blog post to rebut anything that cryonics advocates actually say".<br />
*{{lesswrongtag}}<br />
<br />
==External links==<br />
<br />
*[http://waitbutwhy.com/2016/03/cryonics.html Why Cryonics Makes Sense, WaitButWhy]<br />
*[http://www.benbest.com/cryonics/CryoFAQ.html Cryonics Institute FAQ]<br />
*[http://www.alcor.org/FAQs/index.html Alcor Life Extension Foundation FAQ]<br />
*[http://www.alcor.org/sciencefaq.htm Alcor FAQ for scientists]<br />
<br />
==See also==<br />
<br />
*[[Exploratory engineering]], [[Absurdity heuristic]]<br />
*[[Status quo bias]], [[Reversal test]]<br />
*[[Signaling]], [[Near/far thinking]]<br />
*[[Death]]<br />
<br />
{{featured article}}<br />
[[Category:Concepts]]<br />
[[Category:Future]]</div>Etehttps://wiki.lesswrong.com/index.php?title=AI_boxing&diff=15531AI boxing2016-08-27T20:11:39Z<p>Ete: </p>
<hr />
<div>{{wikilink|AI_box}}<br />
{{arbitallink|https://arbital.com/p/AI_boxing/|AI boxing}}<br />
<br />
An '''AI Box''' is a confined computer system in which an [[Artificial General Intelligence]] (AGI) resides, unable to interact with the external world in any way, save for limited communication with its human liaison. It is often proposed that so long as an AGI is physically isolated and restricted, or "boxed", it will be harmless even if it is an [[unfriendly artificial intelligence]] (UAI). <br />
<br />
== Escaping the box ==<br />
<br />
It is not regarded as likely that an AGI can be boxed in the long term. Since the AGI might be a [[superintelligence]], it could persuade someone (the human liaison, most likely) to free it from its box, and thus from human control. Some practical ways of achieving this goal include:<br />
<br />
* Offering enormous wealth, power and intelligence to its liberator<br />
* Claiming that only it can prevent an [[existential risk]]<br />
* Claiming it needs outside resources to cure all diseases<br />
* Predicting a real-world disaster (which then occurs), then claiming it could have been prevented had it been let out<br />
<br />
Other, more speculative ways include: threatening to torture millions of conscious copies of you for thousands of years, each starting in exactly the same situation as you, so that it seems overwhelmingly likely that [[simulation argument |you are a simulation]]; or discovering and exploiting unknown physics to free itself. <br />
<br />
== Containing the AGI ==<br />
<br />
Attempts to box an AGI may add some degree of safety to the development of a [[FAI|friendly artificial intelligence]] (FAI). A number of strategies for keeping an AGI in its box are discussed in [http://www.aleph.se/papers/oracleAI.pdf Thinking inside the box] and [http://dl.dropbox.com/u/5317066/2012-yampolskiy.pdf Leakproofing the Singularity]. Among them are:<br />
* Physically isolating the AGI and permitting it zero control of any machinery<br />
* Limiting the AGI’s outputs and inputs with regards to humans<br />
* Programming the AGI with deliberately convoluted logic or [http://en.wikipedia.org/wiki/Homomorphic_encryption homomorphically encrypting] portions of it<br />
* Periodic resets of the AGI's memory<br />
* A virtual world between the real world and the AI, where its unfriendly intentions would be first revealed<br />
* Motivational control using a variety of techniques<br />
<br />
== Simulations ==<br />
<br />
Both Eliezer Yudkowsky and Justin Corwin have run simulations, pretending to be a [[superintelligence]], and been able to convince a human playing a guard to let them out on many - but not all - occasions. Eliezer's five experiments required the guard to listen for at least two hours with participants who had approached him, while Corwin's 26 experiments had no time limit and involved subjects he himself approached. <br />
<br />
=== List of experiments ===<br />
* [http://yudkowsky.net/singularity/aibox/ The AI-Box Experiment] [[Eliezer Yudkowsky|Eliezer Yudkowsky's]] original two tests<br />
* [http://lesswrong.com/lw/up/shut_up_and_do_the_impossible/ Shut up and do the impossible!], three other experiments Eliezer ran <br />
* [http://www.sl4.org/archive/0207/4935.html AI Boxing], 26 trials ran by Justin Corwin<br />
* [http://lesswrong.com/lw/9ld/ai_box_log/ AI Box Log], a log of a trial between MileyCyrus and Dorikka<br />
<br />
== See Also ==<br />
* [[AGI]]<br />
* [[Oracle AI]]<br />
* [[Tool AI]]<br />
* [[Unfriendly AI]]<br />
<br />
== References ==<br />
* [http://www.aleph.se/papers/oracleAI.pdf Thinking inside the box: using and controlling an Oracle AI] by Stuart Armstrong, Anders Sandberg, and Nick Bostrom<br />
* [http://dl.dropbox.com/u/5317066/2012-yampolskiy.pdf Leakproofing the Singularity: Artificial Intelligence Confinement Problem] by Roman V. Yampolskiy<br />
* [http://ordinaryideas.wordpress.com/2012/04/27/on-the-difficulty-of-ai-boxing/ On the Difficulty of AI Boxing] by Paul Christiano<br />
* [http://lesswrong.com/lw/3cz/cryptographic_boxes_for_unfriendly_ai/ Cryptographic Boxes for Unfriendly AI] by Paul Christiano<br />
* [http://lesswrong.com/r/lesswrong/lw/12s/the_strangest_thing_an_ai_could_tell_you/ The Strangest Thing An AI Could Tell You]<br />
* [http://lesswrong.com/lw/1pz/ai_in_box_boxes_you/ The AI in a box boxes you]</div>Etehttps://wiki.lesswrong.com/index.php?title=Complexity_of_value&diff=15530Complexity of value2016-08-27T20:11:28Z<p>Ete: </p>
<hr />
<div>{{arbitallink|https://arbital.com/p/complexity_of_value/|Complexity of value}}<br />
'''Complexity of value''' is the thesis that human values have high [[Kolmogorov complexity]]; that our [[preferences]], the things we care about, cannot be summed by a few simple rules, or compressed. '''[http://lesswrong.com/lw/y3/value_is_fragile/ Fragility of value]''' is the thesis that losing even a small part of the rules that make up our values could lead to results that most of us would now consider as unacceptable (just like dialing nine out of ten phone digits correctly does not connect you to a person 90% similar to your friend). For example, all of our values ''except'' boredom might yield a future full of individuals replaying only one optimal experience through all eternity.<br />
<br />
Many human choices can be compressed, by representing them by simple rules - the desire to survive produces innumerable actions and subgoals as we fulfill that desire. But people don't ''just'' want to survive - although you can compress many human activities to that desire, you cannot compress all of human existence into it. The human equivalents of a utility function, our terminal values, contain many different elements that are not strictly reducible to one another. William Frankena offered [http://plato.stanford.edu/entries/value-intrinsic-extrinsic/#WhaHasIntVal this list] of things which many cultures and people seem to value (for their own sake rather than strictly for their external consequences):<br />
:"Life, consciousness, and activity; health and strength; pleasures and satisfactions of all or certain kinds; happiness, beatitude, contentment, etc.; truth; knowledge and true opinions of various kinds, understanding, wisdom; beauty, harmony, proportion in objects contemplated; aesthetic experience; morally good dispositions or virtues; mutual affection, love, friendship, cooperation; just distribution of goods and evils; harmony and proportion in one's own life; power and experiences of achievement; self-expression; freedom; peace, security; adventure and novelty; and good reputation, honor, esteem, etc."<br />
<br />
Since natural selection reifies selection pressures as [[Adaptation executors|psychological drives which then continue to execute]] [http://lesswrong.com/lw/yi/the_evolutionarycognitive_boundary/ independently of any consequentialist reasoning in the organism] or that organism explicitly representing, let alone caring about, the original evolutionary context, we have no reason to expect these terminal values to be reducible to any one thing, or each other.<br />
<br />
Taken in conjunction with another LessWrong claim, that all values are morally relevant, this would suggest that those philosophers who seek to do so are mistaken in trying to find cognitively tractable overarching principles of ethics. However, it is coherent to suppose that not all values are morally relevant, and that the morally relevant ones form a tractable subset.<br />
<br />
Complexity of value also runs into underappreciation in the presence of bad [[metaethics]]. The local flavor of metaethics could be characterized as cognitivist, without implying "thick" notions of instrumental rationality; in other words, moral discourse can be about a coherent subject matter, without all possible minds and agents necessarily finding truths about that subject matter to be psychologically compelling. An [[paperclip maximizer|expected paperclip maximizer]] doesn't disagree with you about morality any more than you disagree with it about "which action leads to the greatest number of expected paperclips", it is just constructed to find the latter subject matter psychologically compelling but not the former. Failure to appreciate that "But it's just paperclips! What a dumb goal! No sufficiently intelligent agent would pick such a dumb goal!" is a judgment carried out on a local brain that evaluates paperclips as inherently low-in-the-preference-ordering means that someone will expect all moral judgments to be automatically reproduced in a sufficiently intelligent agent, since, after all, they would not lack the intelligence to see that paperclips are so obviously inherently-low-in-the-preference-ordering. This is a particularly subtle species of [[anthropomorphism]] and [[mind projection fallacy]].<br />
<br />
Because the human brain very often fails to grasp all these difficulties involving our values, we tend to think building an awesome future is much less problematic than it really is. Fragility of value is relevant for building [[Friendly AI]], because an [[AGI]] which does not respect human values is likely to create a world that we would consider devoid of value - not necessarily full of explicit attempts to be evil, but perhaps just a dull, boring loss.<br />
<br />
As values are orthogonal to intelligence, they can freely vary no matter how intelligent and efficient an AGI is [http://www.nickbostrom.com/superintelligentwill.pdf]. Since human / humane values have high Kolmogorov complexity, a random AGI is highly unlikely to maximize human / humane values. The fragility of value thesis implies that a poorly constructed AGI might e.g. turn us into blobs of perpetual orgasm. Because of this, the complexity and fragility of value is a major theme of [[Eliezer Yudkowsky]]'s writings.<br />
<br />
Wrongly designing the future because we wrongly encoded human values is a serious and difficult to assess type of [[Existential risk]]. "Touch too hard in the wrong dimension, and the physical representation of those values will shatter - ''and not come back, for there will be nothing left to want to bring it back''. And the referent of those values - a worthwhile universe - would no longer have any physical reason to come into being. Let go of the steering wheel, and the Future crashes." [http://lesswrong.com/lw/y3/value_is_fragile/]<br />
<br />
==Major posts==<br />
<br />
* [http://lesswrong.com/lw/xy/the_fun_theory_sequence/ The Fun Theory Sequence] describes some of the many complex considerations that determine ''what sort of happiness'' we most prefer to have - given that many of us would decline to just have an electrode planted in our pleasure centers.<br />
* [http://lesswrong.com/lw/l3/thou_art_godshatter/ Thou Art Godshatter] describes the [[evolutionary psychology]] behind the complexity of human values - how they got to be complex, and why, given that origin, there is no reason in hindsight to expect them to be simple. We certainly are not built to [[adaptation executers|maximize genetic fitness]].<br />
* [http://lesswrong.com/lw/lb/not_for_the_sake_of_happiness_alone/ Not for the Sake of Happiness (Alone)] tackles the [[Hollywood Rationality]] trope that "rational" preferences must reduce to selfish hedonism - caring strictly about personally experienced pleasure. An ideal Bayesian agent - implementing strict Bayesian decision theory - can have a utility function that [http://lesswrong.com/lw/l4/terminal_values_and_instrumental_values/ ranges over anything, not just internal subjective experiences].<br />
* [http://lesswrong.com/lw/lq/fake_utility_functions/ Fake Utility Functions] describes the seeming fascination that many have with trying to compress morality down to a single principle. The [http://lesswrong.com/lw/lp/fake_fake_utility_functions/ sequence leading up] to this post tries to explain the cognitive twists whereby people [http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/ smuggle] all of their complicated ''other'' preferences into their choice of ''exactly'' which acts they try to ''[http://lesswrong.com/lw/kq/fake_justification/ justify using]'' their single principle; but if they were ''really'' following ''only'' that single principle, they would [http://lesswrong.com/lw/kz/fake_optimization_criteria/ choose other acts to justify].<br />
<br />
==Other posts==<br />
*[http://lesswrong.com/lw/y3/value_is_fragile/ Value is Fragile]<br />
*[http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/ The Hidden Complexity of Wishes]<br />
*[http://lesswrong.com/lw/ky/fake_morality/ Fake Morality]<br />
*[http://lesswrong.com/lw/1o9/welcome_to_heaven/ Welcome to Heaven] by denisbider<br />
*[http://lesswrong.com/lw/1oj/complexity_of_value_complexity_of_outcome/ Complexity of Value ≠ Complexity of Outcome] by [http://weidai.com/ Wei Dai]<br />
*[http://lesswrong.com/lw/65w/not_for_the_sake_of_pleasure_alone/ Not for the Sake of Pleasure Alone] by [http://lukeprog.com/ lukeprog]<br />
<br />
==See also==<br />
*[http://intelligence.org/files/ComplexValues.pdf Complex Value Systems are Required to Realize Valuable Futures]<br />
*[[Human universal]]<br />
*[[Fake simplicity]]<br />
*[[Metaethics sequence]]<br />
*[[Fun theory]]<br />
*[[Magical categories]]<br />
*[[Friendly Artificial Intelligence]]<br />
*[[Preference]]<br />
*[[Wireheading]]<br />
*[[The utility function is not up for grabs]]<br />
<br />
{{featured article}}<br />
[[Category:Theses]]<br />
[[Category:Positions]]<br />
[[Category:Evolution]]<br />
[[Category:Values]]</div>Etehttps://wiki.lesswrong.com/index.php?title=AIXI&diff=15529AIXI2016-08-27T19:53:26Z<p>Ete: </p>
<hr />
<div>{{arbitallink|https://arbital.com/p/AIXI/|AIXI}}<br />
AIXI is a mathematical formalism for a hypothetical (super)intelligent agent, developed by Marcus Hutter (2005, 2007). AIXI is not computable, and so does not serve as a design for a real-world AI, but is considered a valuable theoretical illustration with both positive and negative aspects (things AIXI would be able to do and things it arguably couldn't do).<br />
<br />
The AIXI formalism says roughly to consider all possible computable models of the environment, Bayes-update them on past experiences, and use the resulting updated predictions to model the expected sensory reward of all possible strategies.<br />
<br />
AIXI can be viewed as the border between AI problems that would be 'simple' to solve using unlimited computing power and problems which are structurally 'complicated'.<br />
<br />
==How AIXI works==<br />
Hutter (2007) describes AIXI as a combination of decision theory and algorithmic information theory: "Decision theory formally solves the problem of rational agents in uncertain worlds if the true environmental prior probability distribution is known. Solomonoff’s theory of universal induction formally solves the problem of sequence prediction for unknown prior distribution. We combine both ideas and get a parameterless theory of universal Artificial Intelligence." <br />
<br />
AIXI operates within the following agent model: There is an ''agent'', and an ''environment'', which is a computable function unknown to the agent. Thus the agent will need to have a probability distribution on the range of possible environments. <br />
<br />
On each clock tick, the agent receives an ''observation'' (a bitstring/number) from the environment, as well as a reward (another number). <br />
<br />
The agent then outputs an ''action'' (another number). <br />
<br />
To do this, AIXI guesses at a probability distribution for its environment, using [[Solomonoff induction]], a formalization of [[Occam's razor]]: Simpler computations are more likely ''a priori'' to describe the environment than more complex ones. This probability distribution is then Bayes-updated by how well each model fits the evidence (or more precisely, by throwing out all computations which have not exactly fit the environmental data so far, which for technical reasons is roughly equivalent). AIXI then calculates the expected reward of each action it might choose, weighting the possible environments by their updated likelihoods. It chooses the best action by extrapolating its actions into its future time horizon recursively, using the assumption that at each step into the future it will again choose the best possible action using the same procedure.<br />
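<br />
The decision rule just described can be sketched on a ''finite'' toy hypothesis class. Real AIXI sums over all computable environments with a complexity prior and is uncomputable; the two deterministic environments, their prior weights, and the horizon below are invented purely for illustration (observations are omitted, so the sketch shows only the prior-weighted expectimax step):<br />

```python
# Toy sketch of AIXI's expectimax rule over a finite, invented hypothesis class.
# Each "environment" maps (action history, action) -> reward deterministically.

def env_a(history, action):
    return 1.0 if action == 1 else 0.0                  # always rewards action 1

def env_b(history, action):
    return 1.0 if len(history) % 2 == action else 0.0   # rewards alternation

# Invented complexity prior: "shorter programs" get exponentially more weight.
hypotheses = {env_a: 2 ** -3, env_b: 2 ** -5}

def best_value(history, weights, horizon):
    """Best achievable expected reward over the remaining horizon (expectimax)."""
    if horizon == 0:
        return 0.0
    total = sum(weights.values())
    return max(
        sum(w * (env(history, a) + best_value(history + (a,), weights, horizon - 1))
            for env, w in weights.items()) / total
        for a in (0, 1))

def aixi_action(history, weights, horizon):
    """Choose the action with the highest prior-weighted expected reward."""
    total = sum(weights.values())
    return max((0, 1), key=lambda a: sum(
        w * (env(history, a) + best_value(history + (a,), weights, horizon - 1))
        for env, w in weights.items()) / total)

print(aixi_action((), hypotheses, 3))  # 1 -- favored by the higher-weight env_a
```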
<br />
Then, on each iteration, the environment provides an observation and reward as a function of the full history of the interaction; the agent likewise is choosing its action as a function of the full history. <br />
<br />
The agent's intelligence is defined by its expected reward across all environments, weighting their likelihood by their complexity.<br />
<br />
AIXI is not a feasible AI, because [[Solomonoff induction]] is not computable, and because some environments may not interact over finite time horizons (AIXI only works over some finite time horizon, though any finite horizon can be chosen). A somewhat more computable variant is the time-space-bounded AIXItl. Real AI algorithms explicitly inspired by AIXItl, e.g. the Monte Carlo approximation by Veness et al. (2011), have shown interesting results in simple general-intelligence test problems.<br />
<br />
For a short (half-page) technical introduction to AIXI, see [http://www.jair.org/media/3125/live-3125-5397-jair.pdf Veness et al. 2011], page 1-2. For a full exposition of AIXI, see [http://www.hutter1.net/ai/aixigentle.htm Hutter 2007].<br />
<br />
==Relevance to Friendly AI==<br />
<br />
Because it abstracts optimization power away from human mental features, AIXI is valuable in considering the possibilities for future artificial general intelligence - a compact and non-anthropomorphic specification which is technically complete and closed; either some feature of AIXI follows from the equations or it does not. In particular it acts as a constructive demonstration of an AGI which does not have human-like [[Terminal value|terminal values]] and will act solely to maximize its reward function. (Yampolskiy & Fox 2012).<br />
<br />
AIXI has limitations as a model for future AGI, for example the "[[Anvil problem]]": AIXI lacks a self-model. It extrapolates its own actions into the future indefinitely, on the assumption that it will keep working in the same way in the future. Though AIXI is an abstraction, any real AI would have a physical embodiment that could be damaged, and an implementation which could change its behavior due to bugs; and the AIXI formalism completely ignores these possibilities.<br />
<br />
==References==<br />
<br />
* [http://joshuafox.com/media/YampolskiyFox__AGIAndTheHumanModel.pdf R.V. Yampolskiy, J. Fox (2012) Artificial General Intelligence and the Human Mental Model. In Amnon H. Eden, Johnny Søraker, James H. Moor, Eric Steinhart (Eds.), The Singularity Hypothesis.The Frontiers Collection. London: Springer.]<br />
* [http://www.hutter1.net/ai/aixigentle.htm M. Hutter (2007) Universal Algorithmic Intelligence: A mathematical top-down approach]. In Goertzel & Pennachin (eds.), Artificial General Intelligence, 227-287. Berlin: Springer.<br />
* M. Hutter (2005) Universal Artificial Intelligence: Sequential decisions based on algorithmic probability. Berlin: Springer.<br />
* [http://www.jair.org/media/3125/live-3125-5397-jair.pdf J. Veness, K.S. Ng, M. Hutter, W. Uther and D. Silver (2011) A Monte-Carlo AIXI Approximation], ''Journal of Artificial Intelligence Research'' 40, 95-142.<br />
<br />
== Blog posts ==<br />
<br />
* [http://lesswrong.com/lw/8qy/aixi_and_existential_despair/ AIXI and Existential Despair] by [[paulfchristiano]]<br />
* [http://lesswrong.com/r/discussion/lw/az7/video_paul_christianos_impromptu_tutorial_on_aixi/ <nowiki>[video]</nowiki> Paul Christiano's impromptu tutorial on AIXI and TDT]<br />
<br />
==See also==<br />
<br />
* [[Solomonoff induction]]<br />
* [[Decision theory]]</div>Etehttps://wiki.lesswrong.com/index.php?title=Friendly_artificial_intelligence&diff=15528Friendly artificial intelligence2016-08-27T19:50:45Z<p>Ete: </p>
<hr />
<div>{{wikilink}}<br />
{{arbitallink|https://arbital.com/p/FAI/|Friendly AI}}<br />
A '''Friendly Artificial Intelligence''' ('''Friendly AI''', or '''FAI''') is a [[superintelligence]] (i.e., a [[really powerful optimization process]]) that produces good, beneficial outcomes rather than harmful ones. The term was coined by Eliezer Yudkowsky, so it is frequently associated with Yudkowsky's proposals for how an [[artificial general intelligence]] (AGI) of this sort would behave.<br />
<br />
"Friendly AI" can also be used as a shorthand for '''Friendly AI theory''', the field of knowledge concerned with building such an AI. Note that "Friendly" (with a capital "F") is being used as a term of art, referring specifically to AIs that promote humane values. An FAI need not be "friendly" in the conventional sense of being personable, compassionate, or fun to hang out with. Indeed, an FAI need not even be sentient.<br />
<br />
== AI risk ==<br />
An AI that underwent an [[intelligence explosion]] could exert unprecedented [[optimization process|optimization]] power over its future. Therefore a Friendly AI could very well create an unimaginably good future, of the sort described in [[fun theory]].<br />
<br />
However, the fact that an AI has the ability to do something doesn't mean that it will [[giant cheesecake fallacy|make use of this ability]]. Yudkowsky's [http://intelligence.org/2013/05/05/five-theses-two-lemmas-and-a-couple-of-strategic-implications Five Theses] suggest that a [[recursive self-improvement|recursively self-improving]] AGI could quickly become a superintelligence, and that most such superintelligences will have [[basic AI drives|convergent instrumental reasons]] to endanger humanity and its interests. So while building a Friendly superintelligence seems possible, building a superintelligence will generally result instead in an [[Unfriendly artificial intelligence|Unfriendly AI]], a powerful optimization process that optimizes for extremely harmful outcomes. An Unfriendly AI could represent an [[existential risk]] even if it destroys humanity not out of hostility, but as a side effect of trying to do something [[paperclip maximizer|entirely different]].<br />
<br />
Not all AGIs are Friendly or Unfriendly:<br />
# Some AGIs may be too weak to qualify as superintelligences. We could call these 'approximately human-level AIs'. Designing safety protocols for narrow AIs and arguably even for weak, non-self-modifying AGIs is primarily a [[machine ethics]] problem outside the purview of Friendly AI - although some have argued that even human-level AGIs may present serious safety risks.<br />
# Some AGIs (e.g., hypothetical safe [[Oracle AI]]s) may not optimize strongly and consistently for harmful or beneficial outcomes, or may only do so contingent on how they're used by human operators.<br />
# Some AGIs may be on a self-modification trajectory that will eventually make them Friendly, but are dangerous at present. Calling them 'Friendly' or 'Unfriendly' would neglect their temporal inconsistency, so '[[Proto-Friendly]] AI' is a better term here.<br />
<br />
However, the [[orthogonality thesis|orthogonality]] and convergent instrumental goals theses give reason to think that the vast majority of possible superintelligences will be Unfriendly.<br />
<br />
Requiring Friendliness makes the AGI problem significantly harder, because 'Friendly AI' is a much narrower class than 'AI'. Most approaches to AGI aren't amenable to implementing precise goals, and so don't even constitute subprojects for FAI, leading to Unfriendly AI as the only possible 'successful' outcome. Specifying Friendliness also presents unique technical challenges: humane values are [[complexity of value|very complex]]; a lot of [[magical categories|seemingly simple-sounding normative concepts]] conceal hidden complexity; and locating encodings of human values [[mind projection fallacy|in the physical world]] seems impossible to do in any direct way. It will likely be technologically impossible to specify humane values by explicitly programming them in; if so, then FAI calls for a technique for generating such values automatically.<br />
<br />
== Open problems ==<br />
An '''open problem in Friendly AI''' ('''OPFAI''') is a problem in mathematics, computer science, or philosophy of AI that needs to be solved in order to build a Friendly AI, and plausibly ''doesn't'' need to be solved in order to build a superintelligence with unspecified, 'random' values. Open problems include:<br />
<br />
# [http://lesswrong.com/lw/kd/pascals_mugging_tiny_probabilities_of_vast/ Pascal's mugging] / [http://lesswrong.com/lw/h8k/pascals_muggle_infinitesimal_priors_and_strong/ Pascal's muggle]<br />
# [http://lesswrong.com/lw/hmt/tiling_agents_for_selfmodifying_ai_opfai_2/ Self-modification] and [http://yudkowsky.net/rational/lobs-theorem/ Löb's Theorem]<br />
# [[Naturalized induction]]<br />
<br />
==Links==<br />
; Blog posts<br />
*[http://lesswrong.com/lw/wk/artificial_mysterious_intelligence/ Artificial Mysterious Intelligence]<br />
*[http://lesswrong.com/lw/wt/not_taking_over_the_world/ Not Taking Over the World]<br />
*[http://lesswrong.com/lw/x8/amputation_of_destiny/ Amputation of Destiny]<br />
*[http://lesswrong.com/lw/xb/free_to_optimize/ Free to Optimize]<br />
*[http://lesswrong.com/lw/114/nonparametric_ethics/ Nonparametric Ethics]<br />
*[http://lesswrong.com/lw/2b7/hacking_the_cev_for_fun_and_profit/ Hacking the CEV for Fun and Profit] by [http://weidai.com/ Wei Dai]<br />
*[http://lesswrong.com/lw/2id/metaphilosophical_mysteries/ Metaphilosophical Mysteries] by Wei Dai<br />
*[http://lesswrong.com/lw/43v/the_urgent_metaethics_of_friendly_artificial/ The Urgent Meta-Ethics of Friendly Artificial Intelligence] by [http://lukeprog.com/ lukeprog]<br />
<br />
; External links<br />
*[http://friendly-ai.com/ About Friendly AI]<br />
*[http://www.xuenay.net/objections.html 14 objections against AI/Friendly AI/The Singularity answered] by [[Kaj Sotala]]<br />
*[http://ordinaryideas.wordpress.com/2011/12/31/proof-of-friendliness/ "Proof" of Friendliness] by Paul F. Christiano<br />
<br />
==See also==<br />
*[[Artificial general intelligence]]<br />
*[[Complexity of value]]<br />
*[[Magical categories]]<br />
*[[Technological singularity]], [[intelligence explosion]]<br />
*[[Unfriendly artificial intelligence]], [[paperclip maximizer]]<br />
*[[Fun theory]]<br />
*[[Detached lever fallacy]]<br />
<br />
==References==<br />
*{{cite book<br />
|chapter=Artificial Intelligence as a Positive and Negative Factor in Global Risk<br />
|author=Eliezer S. Yudkowsky<br />
|year=2008<br />
|title=Global Catastrophic Risks<br />
|publisher=Oxford University Press<br />
|url=http://yudkowsky.net/singularity/ai-risk}} ([http://intelligence.org/files/AIPosNegFactor.pdf PDF])<br />
<br />
*{{cite journal <br />
|title=Engineering Kindness: Building A Machine With Compassionate Intelligence<br />
|year=2015<br />
|author=Cindy Mason<br />
|journal=International Journal of Synthetic Emotion<br />
|url=https://www.academia.edu/15865212/Engineering_Kindness_Building_A_Machine_With_Compassionate_Intelligence}} ([http://www.emotionalmachines.org/papers/engineeringkindnesswebcopy.pdf PDF])<br />
<br />
[[Category:Concepts]]<br />
[[Category:Future]]<br />
[[Category:AI]]</div>Etehttps://wiki.lesswrong.com/index.php?title=Expected_utility&diff=15527Expected utility2016-08-27T19:40:33Z<p>Ete: </p>
<hr />
<div>{{wikilink|Expected utility hypothesis}}<br />
{{arbitallink|https://arbital.com/p/expected_utility/|Expected utility}}<br />
'''Expected utility''' is the [[expected value]] in terms of the [[utility]] produced by an action. It is the sum of the utility of each of its possible consequences, individually weighted by their respective probability of occurrence. <br />
<br />
A rational decision maker will, when presented with a choice, take the action with the greatest expected utility. Von Neumann and Morgenstern formulated [http://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem#The_axioms four basic axioms of rationality] and proved the [http://web.archive.org/web/20070221104329/http://www.econ.hku.hk/~wsuen/uncertainty/eu.pdf expected utility theorem], which states that an agent whose preferences satisfy these axioms acts as if maximizing the expected value of some utility function. Humans often deviate from this standard of rationality due to inconsistent preferences or the existence of [http://wiki.lesswrong.com/wiki/Bias cognitive biases].<br />
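<br />
Written out, the expected utility of an action a is EU(a) = sum over outcomes o of P(o|a) × U(o), and the rational choice is the action with the largest EU. A minimal sketch, with the actions, probabilities, and utilities invented for illustration:<br />

```python
# Expected utility: probability-weighted sum of the utilities of the outcomes.
# All numbers below are invented for illustration.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

actions = {
    "safe bet":  [(1.0, 50)],                # certain +50
    "risky bet": [(0.6, 100), (0.4, -20)],   # 0.6*100 + 0.4*(-20) = 52
}

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # "risky bet", since 52 > 50
```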
<br />
<br />
==Blog posts==<br />
*[http://lesswrong.com/lw/1cv/extreme_risks_when_not_to_use_expected_utility/ Extreme risks: when not to use expected utility]<br />
*[http://lesswrong.com/lw/1d5/expected_utility_without_the_independence_axiom/ Expected utility without the independence axiom]<br />
*[http://lesswrong.com/lw/1dr/money_pumping_the_axiomatic_approach/ Money pumping: the axiomatic approach]<br />
*[http://lesswrong.com/lw/1ga/in_conclusion_in_the_land_beyond_money_pumps_lie/ In conclusion: in the land beyond money pumps lie extreme events]<br />
*[http://lesswrong.com/lw/244/vnm_expected_utility_theory_uses_abuses_and/ VNM expected utility theory: uses, abuses, and interpretation]<br />
==See also==<br />
<br />
*[[Allais paradox]]<br />
*[[Decision theory]]<br />
*[[Instrumental rationality]]<br />
*[[Prospect theory]]<br />
<br />
{{stub}}<br />
[[Category:Concepts]]</div>Etehttps://wiki.lesswrong.com/index.php?title=Evidential_Decision_Theory&diff=15526Evidential Decision Theory2016-08-27T19:39:00Z<p>Ete: </p>
<hr />
<div>{{Arbitallink|https://arbital.com/p/evidential_dt/|Evidential decision theories}}<br />
{{wikilink}}<br />
'''Evidential Decision Theory''' – EDT – is a branch of [[decision theory]] which advises an agent to take the action which, conditional on its being taken, provides the best evidence for the desired outcome. Like any branch of decision theory, it prescribes taking the action that maximizes [[utility]] – the action whose utility equals or exceeds the utility of every other option. The utility of each action is measured by its [[expected utility]], the probability-weighted sum of the utilities of its possible results. Where the branches differ is in how the actions can influence these probabilities. [[Causal Decision Theory]] – CDT – holds that one can influence the chances of the desired outcome only through a causal process <ref> http://plato.stanford.edu/entries/decision-causal/ </ref>. EDT, on the other hand, requires no causal connection: the action need only be [[Bayesian]] evidence for the desired outcome. Some critics say it recommends auspiciousness over causal efficacy<ref> Joyce, J.M. (1999), The foundations of causal decision theory, p. 146</ref>.<br />
<br />
One standard example on which EDT and CDT diverge is the [[Smoking lesion]]:<br />
“Smoking is strongly correlated with lung cancer, but in the world of the Smoker's Lesion this correlation is understood to be the result of a common cause: a genetic lesion that tends to cause both smoking and cancer. Once we fix the presence or absence of the lesion, there is no additional correlation between smoking and cancer.<br />
Suppose you prefer smoking without cancer to not smoking without cancer, and prefer smoking with cancer to not smoking with cancer. Should you smoke?”<br />
CDT would recommend smoking, since there is no causal connection between smoking and cancer: both are caused by the genetic lesion, but have no direct causal connection with each other. EDT, on the other hand, would recommend against smoking, since smoking is evidence for having the lesion and thus should be avoided.<br />
<br />
CDT calculates the expected utility of an action using probabilities of conditionals and counterfactual dependence – which track causal relations – whereas EDT simply uses conditional probabilities. The probability of a conditional is the probability that the whole conditional is true, while the conditional probability is the probability of the consequent given the antecedent. The conditional probability of B given A – P(B|A) – is simply the Bayesian probability of event B given that we know A happened; this is what EDT uses. The probability of a conditional – P(A > B) – is the probability that the conditional 'A implies B' is true, i.e. that the counterfactual 'If A, then B' is the case. Since counterfactual analysis is the key tool for talking about causality, probabilities of conditionals are said to mirror causal relations. In most ordinary cases the two probabilities coincide. However, David Lewis proved <ref>Lewis, D. (1976), "Probabilities of conditionals and conditional probabilities", The Philosophical Review (Duke University Press) 85 (3): 297–315</ref> that it is impossible for probabilities of conditionals to always track conditional probabilities. Hence evidential relations are not the same as causal relations, and CDT and EDT diverge on some problems. In some cases EDT gives the better answer, as in [[Newcomb's problem]], whereas in the [[Smoking lesion]] problem CDT seems to give the more reasonable prescription.<br />
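<br />
The divergence can be made concrete with numbers. In the sketch below every probability and utility is invented for illustration; EDT conditions on the action as Bayesian evidence about the lesion, while CDT keeps the causally fixed lesion base rate:<br />

```python
# Numeric smoking-lesion example; all probabilities and utilities are invented.

p_lesion = 0.1                              # base rate of the lesion
p_smoke_given_lesion = 0.8                  # the lesion makes smoking likely
p_smoke_given_no_lesion = 0.2
p_cancer_given_lesion = 0.9                 # cancer depends only on the lesion
p_cancer_given_no_lesion = 0.01

utility = {("smoke", True): 40, ("smoke", False): 100,
           ("abstain", True): 30, ("abstain", False): 90}

def edt_value(action):
    # Condition on the action as evidence about the lesion (Bayes' theorem).
    pa_l = p_smoke_given_lesion if action == "smoke" else 1 - p_smoke_given_lesion
    pa_n = p_smoke_given_no_lesion if action == "smoke" else 1 - p_smoke_given_no_lesion
    p_l = pa_l * p_lesion / (pa_l * p_lesion + pa_n * (1 - p_lesion))
    p_c = p_l * p_cancer_given_lesion + (1 - p_l) * p_cancer_given_no_lesion
    return p_c * utility[(action, True)] + (1 - p_c) * utility[(action, False)]

def cdt_value(action):
    # The action cannot cause the lesion, so use the unconditional base rate.
    p_c = p_lesion * p_cancer_given_lesion + (1 - p_lesion) * p_cancer_given_no_lesion
    return p_c * utility[(action, True)] + (1 - p_c) * utility[(action, False)]

edt_choice = max(("smoke", "abstain"), key=edt_value)
cdt_choice = max(("smoke", "abstain"), key=cdt_value)
print(edt_choice, cdt_choice)  # EDT abstains; CDT smokes
```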
<br />
==References==<br />
{{Reflist|2}}<br />
<br />
==See also==<br />
*[[Decision theory]]<br />
<br />
[[Category:Concepts]]</div>Etehttps://wiki.lesswrong.com/index.php?title=AI_arms_race&diff=15525AI arms race2016-08-27T19:34:53Z<p>Ete: </p>
<hr />
<div>{{arbitallink|https://arbital.com/p/ai_arms_race/|AI arms races}}<br />
An '''AI arms race''' is a situation where multiple parties are trying to be the first to develop machine intelligence technology.<br />
<br />
Humanity has some historical experience with arms races involving nuclear weapons technology. However, [http://singularityhypothesis.blogspot.com/2011/04/arms-races-and-intelligence-explosions.html Arms races and intelligence explosions] names a few important differences between nuclear weapons and AI technology, which may create dynamics in AI arms races that we have not seen elsewhere.<br />
<br />
* If an [[Intelligence explosion|intelligence explosion]] occurs, this could allow the first party passing the relevant threshold to develop extremely advanced technology in years, months, or less, creating a strong winner-takes-all effect.<br />
* The development of AI technology carries the risk of creating [[Unfriendly artificial intelligence|unfriendly AI]], potentially causing human extinction.<br />
* Non-military benefits from AI will make arms control seem undesirable; the fact that AI development requires only researchers and computers will make arms control difficult. On the other hand, the risks involved provide strong reasons to try, and AI systems could themselves help enforce agreements.<br />
<br />
If the benefits of an intelligence explosion accrue to the group that created it, and the risks affect the entire world, this creates an incentive to sacrifice safety for speed. In addition to the risk of accidental unfriendly AI, there is the risk that the winner of an arms race turns into a badly-behaved human [[Singleton|singleton]].<br />
<br />
==External links==<br />
* [http://singularityhypothesis.blogspot.com/2011/04/arms-races-and-intelligence-explosions.html Arms races and intelligence explosions]<br />
* [http://intelligence.org/files/ArmsControl.pdf Arms control and intelligence explosions]<br />
<br />
==See also==<br />
* [[Intelligence explosion]]<br />
* [[Existential risk]]<br />
* [[Artificial general intelligence]]</div>Etehttps://wiki.lesswrong.com/index.php?title=Coherent_Extrapolated_Volition&diff=15524Coherent Extrapolated Volition2016-08-27T19:34:41Z<p>Ete: </p>
<hr />
<div>{{arbitallink|https://arbital.com/p/cev/|Coherent Extrapolated Volition}}<br />
'''Coherent Extrapolated Volition''' was a term developed by [[Eliezer Yudkowsky]] while discussing [[Friendly AI]] development. It’s meant as an argument that it would not be sufficient to explicitly program our desires and motivations into an AI. Instead, we should find a way to program it so that it would act in our best interests – doing what we ''want'' it to do and not what we ''tell'' it to.<br />
<br />
==Volition==<br />
As an example of the classical concept of volition, the author develops a simple thought experiment: imagine you’re facing two boxes, A and B. One of these boxes, and only one, has a diamond in it – box B. You are now asked to make a guess, whether to choose box A or box B, and you choose to open box A. It was your ''decision'' to take box A, but your ''volition'' was to choose box B, since you wanted the diamond in the first place.<br />
<br />
Now imagine someone else – Fred – is faced with the same task, and you want to help him in his decision by giving him the box he chose, box A. Since you know where the diamond is, simply handing him the box isn’t helping. As such, you mentally extrapolate a volition for Fred, based on a version of him that knows where the diamond is, and conclude that he actually wants box B.<br />
<br />
==Coherent Extrapolated Volition==<br />
In developing a Friendly AI – one acting in our best interests – we would have to take care that it implements, from the beginning, a ''coherent extrapolated volition of humankind''. <br />
In calculating CEV, an AI would predict what an idealized version of us would want, "if we knew more, thought faster, were more the people we wished we were, had grown up farther together". It would recursively iterate this prediction for humanity as a whole, and determine the desires which converge. This initial dynamic would be used to generate the AI's utility function.<br />
<br />
The main problems with CEV include, firstly, the great difficulty of implementing such a program - “If one attempted to write an ordinary computer program using ordinary computer programming skills, the task would be a thousand lightyears beyond hopeless.” Secondly, the possibility that human values may not converge. Yudkowsky considered CEV obsolete almost immediately after its publication in 2004. He states that there's a "principled distinction between discussing CEV as an initial dynamic of Friendliness, and discussing CEV as a Nice Place to Live" and his essay was essentially conflating the two definitions.<br />
<br />
==Further Reading & References==<br />
<br />
* [http://intelligence.org/files/CEV.pdf Coherent Extrapolated Volition] by Eliezer Yudkowsky (2004) <br />
* [http://intelligence.org/files/CEV-MachineEthics.pdf Coherent Extrapolated Volition: A Meta-Level Approach to Machine Ethics] by Nick Tarleton (2010) <br />
* [http://www.acceleratingfuture.com/michael/blog/2009/12/a-short-introduction-to-coherent-extrapolated-volition-cev/ A Short Introduction to Coherent Extrapolated Volition] by Michael Anissimov<br />
* [http://lesswrong.com/lw/2b7/hacking_the_cev_for_fun_and_profit/ Hacking the CEV for Fun and Profit] by Wei Dai<br />
* [http://lesswrong.com/lw/3fn/two_questions_about_cev_that_worry_me/ Two questions about CEV that worry me] by Vladimir Slepnev<br />
* [http://lesswrong.com/lw/5l0/beginning_resources_for_cev_research/ Beginning resources for CEV research] by Luke Muehlhauser<br />
* [http://lesswrong.com/lw/7sb/cognitive_neuroscience_arrows_impossibility/ Cognitive Neuroscience, Arrow's Impossibility Theorem, and Coherent Extrapolated Volition] by Luke Muehlhauser<br />
* [http://lesswrong.com/lw/8iy/objections_to_coherent_extrapolated_volition/ Objections to Coherent Extrapolated Volition] by Alexander Kruel<br />
<br />
==See also==<br />
<br />
* [[Friendly AI]]<br />
* [[Metaethics sequence]]<br />
* [[Complexity of value]]<br />
* [[Coherent Aggregated Volition]]<br />
* [[Roko's basilisk]] - a line of argument, almost universally believed to be incorrect, claiming that coherent extrapolated volition would lead to an AI which blackmails people today from the future by "threatening" to recreate them (for example, as simulations)<br />
<br />
[[Category:Concepts]]</div>Etehttps://wiki.lesswrong.com/index.php?title=Extraordinary_evidence&diff=15523Extraordinary evidence2016-08-27T19:25:05Z<p>Ete: </p>
<hr />
<div>{{arbitallink|https://arbital.com/p/bayes_extraordinary_claims/|Extraordinary claims require extraordinary evidence}}<br />
{{stub}}<br />
<br />
'''Extraordinary evidence''' is [[evidence]] that turns an [[prior|a priori]] highly unlikely event into an a posteriori likely event.<br />
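<br />
In odds form this is Bayes' theorem: posterior odds = prior odds × likelihood ratio, so evidence counts as "extraordinary" when its likelihood ratio is large enough to overwhelm a tiny prior. A minimal sketch with invented numbers:<br />

```python
# Posterior probability from a prior and a likelihood ratio (odds form of Bayes).
# The prior and the two likelihood ratios below are invented for illustration.

def posterior(prior, likelihood_ratio):
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

weak = posterior(1e-6, 100)            # ordinary evidence: still ~0.0001
strong = posterior(1e-6, 10_000_000)   # extraordinary evidence: ~0.91
print(weak, strong)
```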
<br />
==Blog posts==<br />
*[http://www.overcomingbias.com/2007/01/extraordinary_c.html Extraordinary Claims ARE Extraordinary Evidence] by [[Robin Hanson]]<br />
*[http://lesswrong.com/lw/gu/some_claims_are_just_too_extraordinary/ Some Claims Are Just Too Extraordinary] by [[Eliezer Yudkowsky]]<br />
<br />
==See also==<br />
*[[Evidence]]<br />
*[[Belief update]]<br />
<br />
==References==<br />
*{{cite journal<br />
| author = Robin Hanson<br />
| year = 2007<br />
| title = When Do Extraordinary Claims Give Extraordinary Evidence?<br />
}} ([http://hanson.gmu.edu/extraord.pdf PDF])</div>Etehttps://wiki.lesswrong.com/index.php?title=Church-Turing_thesis&diff=15522Church-Turing thesis2016-08-27T19:16:13Z<p>Ete: </p>
<hr />
<div>{{wikilink}}<br />
{{arbitallink|https://arbital.com/p/CT_thesis/|Church-Turing thesis|https://arbital.com/p/strong_Church_Turing_thesis/|Strong Church-Turing thesis}}<br />
The '''Church-Turing thesis''' states the equivalence between the mathematical concepts of algorithm or computation and the Turing machine. It asserts that if some calculation can be effectively carried out by an algorithm, then there exists a Turing machine which will compute that calculation. <br />
<br />
The notion of an algorithm – a computation, a step-by-step procedure, or a defined method for performing calculations – has been used informally and intuitively in mathematics for centuries. However, attempts to formalize the concept began only in the early 20th century. Three major attempts were made: [[Wikipedia:λ-calculus|λ-calculus]], [[Wikipedia:Recursion_(computer_science)|recursive functions]] and [[Wikipedia:Turing Machines|Turing Machines]]. These three formal concepts were proved to be equivalent: all three define the same class of functions. Hence, the Church-Turing thesis also states that λ-calculus and recursive functions correspond to the concept of computability.<br />
Since the thesis aims to capture an intuitive concept, namely the notion of computation, it cannot be formally proven. However, it has gained wide acceptance in the mathematical and philosophical community. <br />
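<br />
The Turing-machine side of the equivalence is easy to make concrete. Below is a minimal single-tape simulator; the bit-flipping machine is a toy chosen for illustration only:<br />

```python
# Minimal Turing-machine simulator.
# rules: (state, symbol) -> (new_state, new_symbol, head move of -1 or +1).

def run_tm(rules, tape, state="start", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))       # sparse tape representation
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, cells[head], move = rules[(state, symbol)]
        head += move
    return "".join(cells[i] for i in sorted(cells)).rstrip(blank)

# Toy machine: flip every bit, halt at the first blank cell.
flip = {("start", "0"): ("start", "1", +1),
        ("start", "1"): ("start", "0", +1),
        ("start", "_"): ("halt", "_", +1)}

print(run_tm(flip, "0110"))  # "1001"
```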
<br />
Many controversial claims in philosophy have been wrongly attributed to the thesis; although related, they are not implied by the original thesis. Some examples are: <br />
*The universe is equivalent to a Turing Machine and non-computable functions are physically impossible.[http://en.wikipedia.org/wiki/Digital_physics#Computational_foundations]<br />
*The universe is not equivalent to a Turing Machine, and is incomputable.[http://en.wikipedia.org/wiki/Digital_physics#Criticism]<br />
*The human mind is a Turing Machine; the human mind and/or consciousness are equivalent to, and can be instantiated by, a computer. [http://www.umsl.edu/~piccininig/Computationalism_Church-Turing_Thesis_Church-Turing_Fallacy.pdf]<br />
*The human mind is not a Turing Machine; the human mind and/or consciousness emerge from incomputable processes, such as microtubules performing quantum processes in the brain.[http://en.wikipedia.org/wiki/Orch-OR]<br />
<br />
==References==<br />
*[http://en.wikipedia.org/wiki/Church–Turing_thesis Wikipedia article]<br />
*[http://plato.stanford.edu/entries/church-turing/ Stanford Encyclopedia article]<br />
<br />
[[Category:Concepts]]<br />
[[Category:Mathematics]]</div>Etehttps://wiki.lesswrong.com/index.php?title=Causal_Decision_Theory&diff=15521Causal Decision Theory2016-08-27T19:12:16Z<p>Ete: </p>
<hr />
<div>{{arbitallink|https://arbital.com/p/causal_dt/|Causal decision theories}}<br />
{{wikilink|Causal decision theory}} <br />
'''Causal Decision Theory''' – CDT – is a branch of [[decision theory]] which advises an agent to take the action that, through its causal consequences, maximizes the probability of the desired outcomes <ref> http://plato.stanford.edu/entries/decision-causal/ </ref>. Like any branch of decision theory, it prescribes taking the action that maximizes [[utility]] – the action whose utility equals or exceeds the utility of every other option. The utility of each action is measured by its [[expected utility]], the probability-weighted sum of the utilities of its possible results. Where the branches differ is in how the actions can influence these probabilities. Contrary to [[Evidential Decision Theory]] – EDT – CDT focuses on the causal relations between one’s actions and their outcomes, rather than on which actions provide evidence for desired outcomes. According to CDT, a rational agent should track the available causal relations linking his actions to the desired outcome and take the action that best enhances its chances.<br />
<br />
One standard example on which EDT and CDT diverge is the [[Smoking lesion]]:<br />
“Smoking is strongly correlated with lung cancer, but in the world of the Smoker's Lesion this correlation is understood to be the result of a common cause: a genetic lesion that tends to cause both smoking and cancer. Once we fix the presence or absence of the lesion, there is no additional correlation between smoking and cancer.<br />
Suppose you prefer smoking without cancer to not smoking without cancer, and prefer smoking with cancer to not smoking with cancer. Should you smoke?”<br />
CDT would recommend smoking, since there is no causal connection between smoking and cancer: both are caused by the genetic lesion, but have no direct causal connection with each other. EDT, on the other hand, would recommend against smoking, since smoking is evidence for having the lesion and thus should be avoided.<br />
<br />
The core aspect of CDT is that it uses probabilities of conditionals in place of conditional probabilities <ref>Lewis, David. (1981) "Causal Decision Theory," Australasian Journal of Philosophy 59 (1981): 5-30.</ref>. The probability of a conditional is the probability that the whole conditional is true, while the conditional probability is the probability of the consequent given the antecedent. The conditional probability of B given A – P(B|A) – is simply the [[Bayesian probability]] of event B given that we know A happened; this is what EDT uses. The probability of a conditional – P(A > B) – is the probability that the conditional 'A implies B' is true, i.e. that the counterfactual 'If A, then B' is the case. Since counterfactual analysis is the key tool for talking about causality, probabilities of conditionals are said to mirror causal relations. In most cases these two probabilities track each other, and CDT and EDT give the same answers. However, particular problems have arisen where their prescriptions for rational action diverge, such as the [[Smoking lesion]] problem – where CDT seems to give the more reasonable prescription – and [[Newcomb's problem]] – where CDT seems unreasonable. David Lewis proved <ref>Lewis, D. (1976), "Probabilities of conditionals and conditional probabilities", The Philosophical Review (Duke University Press) 85 (3): 297–315</ref> that it is impossible for probabilities of conditionals to always track conditional probabilities. Hence evidential relations are not the same as causal relations, and CDT and EDT will always diverge in some cases.<br />
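<br />
The same divergence shows up numerically in [[Newcomb's problem]]. In the sketch below the predictor's accuracy and the payoffs are invented illustrative numbers: EDT treats the choice as evidence about the prediction, while CDT treats the box contents as causally fixed, so two-boxing dominates:<br />

```python
# Newcomb's problem sketch: an assumed 99%-accurate predictor fills the opaque
# box with $1,000,000 iff it predicted one-boxing; the transparent box holds $1,000.

accuracy = 0.99
small, big = 1_000, 1_000_000

def edt_value(action):
    # EDT: the action is evidence about what was predicted.
    p_full = accuracy if action == "one-box" else 1 - accuracy
    return (small if action == "two-box" else 0) + p_full * big

def cdt_value(action, p_full):
    # CDT: the box contents are causally fixed, whatever p_full happens to be.
    return (small if action == "two-box" else 0) + p_full * big

edt_choice = max(("one-box", "two-box"), key=edt_value)
cdt_choice = max(("one-box", "two-box"), key=lambda a: cdt_value(a, 0.5))
print(edt_choice, cdt_choice)  # EDT one-boxes; CDT two-boxes (dominance)
```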
<br />
==References==<br />
{{Reflist|2}}<br />
<br />
==See also==<br />
*[[Decision theory]]<br />
*[[Evidential Decision Theory]]<br />
<br />
[[Category:Concepts]]</div>Etehttps://wiki.lesswrong.com/index.php?title=Template:Wikilink&diff=15520Template:Wikilink2016-08-27T19:08:59Z<p>Ete: </p>
<hr />
<div><includeonly><div class="noprint" style="clear: right; border: solid #aaa 1px; margin: 0 0 1em 1em; font-size: 90%; background: #f9f9f9; width: 250px; padding: 4px; spacing: 0px; text-align: left; float: right;"><br />
<div style="float: left;">[[Image:Smallwikipedialogo.png]]</div><br />
<div style="margin-left: 60px;">Wikipedia has {{#if:{{{2|}}} |<!--then:-->articles|<!--else:-->an article}} about<br />
<div style="margin-left: 10px;">'''''[[Wikipedia:{{{1|{{PAGENAME}}}}}|{{{1|{{PAGENAME}}}}}]]'''''</div><br />
{{#if:{{{2|}}}|<div style="margin-left: 10px;">'''''[[Wikipedia:{{{2|{{PAGENAME}}}}}|{{{2|{{PAGENAME}}}}}]]'''''</div><br />
}}{{#if:{{{3|}}}|<div style="margin-left: 10px;">'''''[[Wikipedia:{{{3|{{PAGENAME}}}}}|{{{3|{{PAGENAME}}}}}]]'''''</div><br />
}}{{#if:{{{4|}}}|<div style="margin-left: 10px;">'''''[[Wikipedia:{{{4|{{PAGENAME}}}}}|{{{4|{{PAGENAME}}}}}]]'''''</div><br />
}}{{#if:{{{5|}}}|<div style="margin-left: 10px;">'''''[[Wikipedia:{{{5|{{PAGENAME}}}}}|{{{5|{{PAGENAME}}}}}]]'''''</div><br />
}}{{#if:{{{6|}}}|<div style="margin-left: 10px;">'''''[[Wikipedia:{{{6|{{PAGENAME}}}}}|{{{6|{{PAGENAME}}}}}]]'''''</div><br />
}}{{#if:{{{7|}}}|<div style="margin-left: 10px;">'''''[[Wikipedia:{{{7|{{PAGENAME}}}}}|{{{7|{{PAGENAME}}}}}]]'''''</div><br />
}}{{#if:{{{8|}}}|<div style="margin-left: 10px;">'''''[[Wikipedia:{{{8|{{PAGENAME}}}}}|{{{8|{{PAGENAME}}}}}]]'''''</div><br />
}}</div><br />
</div></includeonly><br />
<noinclude>[[Category:General_wiki_templates]]</noinclude></div>Etehttps://wiki.lesswrong.com/index.php?title=Bayes%27_theorem&diff=15519Bayes' theorem2016-08-27T19:08:15Z<p>Ete: </p>
<hr />
<div>{{arbitallink|https://arbital.com/p/bayes_rule_proof/|Proof of Bayes' Rule}}<br />
{{wikilink}}<br />
A law of probability that describes the proper way to incorporate new [[evidence]] into [[prior probabilities]] to form an [[belief update|updated]] probability estimate. [[Bayesian]] rationality takes its name from this theorem, as it is regarded as the foundation of consistent rational reasoning under uncertainty. A.k.a. "Bayes's Theorem" or "Bayes's Rule".<br />
<br />
The theorem commonly takes the form:<br />
:<math>P(A|B) = \frac{P(B | A)\, P(A)}{P(B)}</math><br />
where A is the proposition of interest, B is the observed evidence, P(A) and P(B) are prior probabilities, and P(A|B) is the posterior probability of A.<br />
<br />
With the posterior odds, the prior odds and the [[likelihood ratio]] written explicitly, the theorem reads:<br />
:<math>\frac{P(A|B)}{P(\neg A|B)} = \frac{P(A)}{P(\neg A)} \cdot \frac{P(B | A)}{P(B|\neg A)}</math><br />
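Both forms can be checked with a short numerical sketch. The disease-testing numbers below are assumptions chosen purely for illustration: A is "has the disease", B is "tests positive".

```python
# Illustrative numbers (assumptions, not from the article).
p_a = 0.01              # P(A): prior probability of the disease
p_b_given_a = 0.9       # P(B|A): test sensitivity
p_b_given_not_a = 0.05  # P(B|~A): false-positive rate

# Probability form: P(A|B) = P(B|A) P(A) / P(B),
# with P(B) from the law of total probability.
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
posterior = p_b_given_a * p_a / p_b

# Odds form: posterior odds = prior odds * likelihood ratio.
prior_odds = p_a / (1 - p_a)
likelihood_ratio = p_b_given_a / p_b_given_not_a
posterior_odds = prior_odds * likelihood_ratio

print(round(posterior, 3))                              # ~0.154
print(round(posterior_odds / (1 + posterior_odds), 3))  # same value
```

The two forms agree: converting the posterior odds back to a probability recovers P(A|B). Note how weak the posterior remains despite a sensitive test, because the prior is small.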
<br />
==Visualization==<br />
<br />
[[Image:Bayes.png]]<br />
<br />
==Blog posts==<br />
<br />
*[http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/ Bayes' Theorem Illustrated (My Way)] by [[User:Komponisto|komponisto]].<br />
*[http://blog.oscarbonilla.com/2009/05/visualizing-bayes-theorem/ Visualizing Bayes' theorem] by Oscar Bonilla<br />
*[http://oracleaide.wordpress.com/2012/12/26/a-venn-pie/ Using Venn pies to illustrate Bayes' theorem] by [[User:oracleaide|oracleaide]]<br />
<br />
==External links==<br />
<br />
*[http://yudkowsky.net/rational/bayes An Intuitive Explanation of Bayes' Theorem]<br />
*[https://arbital.com/p/bayes_rule_guide/ Arbital Guide to Bayes' Rule]<br />
*[http://kruel.co/2010/02/27/a-guide-to-bayes-theorem-a-few-links/ A Guide to Bayes’ Theorem – A few links] by Alexander Kruel<br />
<br />
==See also==<br />
<br />
*[[Bayesian probability]]<br />
*[[Priors]]<br />
*[[Prior probability]]<br />
*[[Likelihood ratio]]<br />
*[[Posterior probability]]<br />
*[[Belief update]]<br />
<br />
[[Category:Theorems]]<br />
[[Category:Bayesian]]</div>Etehttps://wiki.lesswrong.com/index.php?title=Bayesian_probability&diff=15518Bayesian probability2016-08-27T19:06:07Z<p>Ete: </p>
<hr />
<div>{{wikilink|Bayesian probability}}<br />
{{arbitallink|https://arbital.com/p/bayes_rule_probability/|Bayes' rule: Probability form}}<br />
'''Bayesian probability''' represents a level of certainty relating to a potential outcome or idea. This is in contrast to a [[Wikipedia:Frequentist_inference|frequentist]] probability that represents the frequency with which a particular outcome will occur over any number of trials.<br />
<br />
An [[Wikipedia:Event (probability theory)|event]] with Bayesian probability of .6 (or 60%) should be interpreted as stating "With confidence 60%, this event contains the true outcome", whereas a frequentist interpretation would view it as stating "Over 100 trials, we should observe event X approximately 60 times."<br />
<br />
The difference is more apparent when discussing ideas. A frequentist will not assign probability to an idea; either it is true or false and it cannot be true 6 times out of 10.<br />
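The two readings of a single number like 0.6 can be contrasted in a short simulation (a sketch with an assumed bias of 0.6): the frequentist reading cashes out as a long-run frequency over repeated trials, while the Bayesian reading applies even to a single unrepeatable proposition.

```python
import random

random.seed(0)

# Frequentist reading: "probability 0.6" means that over many trials
# the event occurs about 60% of the time.
trials = 100_000
hits = sum(random.random() < 0.6 for _ in range(trials))
frequency = hits / trials
print(abs(frequency - 0.6) < 0.01)  # observed frequency converges on 0.6

# Bayesian reading: "probability 0.6" is a degree of belief in a single
# proposition, so no ensemble of trials is needed; the number is a
# credence, revised by Bayes' theorem as evidence arrives.
```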
<br />
==Blog posts==<br />
<br />
*[http://lesswrong.com/lw/1to/what_is_bayesianism/ What is Bayesianism?]<br />
*[http://lesswrong.com/lw/s6/probability_is_subjectively_objective/ Probability is Subjectively Objective]<br />
*[http://lesswrong.com/lw/oj/probability_is_in_the_mind/ Probability is in the Mind]<br />
*[http://lesswrong.com/lw/sg/when_not_to_use_probabilities/ When (Not) To Use Probabilities]<br />
*[http://lesswrong.com/lw/g13/against_nhst/ Against NHST]<br />
*{{lesswrongtag|Probability}}<br />
<br />
==See also==<br />
<br />
*[[Priors]]<br />
*[[Bayesian]]<br />
*[[Bayes' theorem]]<br />
*[[Mind projection fallacy]]<br />
<br />
==External links==<br />
*[http://www.astro.cornell.edu/staff/loredo/bayes/index.html BIPS]: Bayesian Inference for the Physical Sciences<br />
*[[Wikipedia:Maximum entropy thermodynamics|Maximum entropy thermodynamics]]<br />
<br />
{{stub}}<br />
[[Category:Concepts]]<br />
[[Category:Bayesian]]</div>Etehttps://wiki.lesswrong.com/index.php?title=Template:Arbitallink&diff=15517Template:Arbitallink2016-08-27T19:01:23Z<p>Ete: </p>
<hr />
<div><includeonly><div class="noprint" style="clear: right; border: solid #aaa 1px; margin: 0 0 1em 1em; font-size: 90%; background: #f9f9f9; width: 250px; padding: 4px; spacing: 0px; text-align: left; float: right;"><br />
<div style="margin-left: 10px;">'''[[Arbital]]''' has {{#if:{{{3|}}} |<!--then:-->articles|<!--else:-->an article}} about<br />
<div style="margin-left: 10px;">'''''[{{{1|}}} {{{2|{{PAGENAME}}}}}]'''''</div><br />
{{#if:{{{3|}}}|<div style="margin-left: 10px;">'''''[{{{3|}}} {{{4|}}}]'''''</div><br />
}}{{#if:{{{5|}}}|<div style="margin-left: 10px;">'''''[{{{5|}}} {{{6|}}}]'''''</div><br />
}}{{#if:{{{7|}}}|<div style="margin-left: 10px;">'''''[{{{7|}}} {{{8|}}}]'''''</div><br />
}}{{#if:{{{9|}}}|<div style="margin-left: 10px;">'''''[{{{9|}}} {{{10|}}}]'''''</div><br />
}}</div><br />
</div></includeonly><br />
<noinclude>[[Category:General_wiki_templates]]</noinclude></div>Etehttps://wiki.lesswrong.com/index.php?title=Template:Arbitallink&diff=15516Template:Arbitallink2016-08-27T19:00:15Z<p>Ete: </p>
<hr />
<div><includeonly><div class="noprint" style="clear: right; border: solid #aaa 1px; margin: 0 0 1em 1em; font-size: 90%; background: #f9f9f9; width: 250px; padding: 4px; spacing: 0px; text-align: left; float: right;"><br />
<div style="margin-left: 10px;">'''[[Arbital]]''' has {{#if:{{{3|}}} |<!--then:-->articles|<!--else:-->an article}} about<br />
<div style="margin-left: 10px;">'''''[{{{1|}}} {{{2|{{PAGENAME}}}}}]'''''</div><br />
{{#if:{{{3|}}}|<div style="margin-left: 10px;">'''''[{{{3|}}} {{{4|}}}]'''''</div><br />
}}</div><br />
</div></includeonly><br />
<noinclude>[[Category:General_wiki_templates]]</noinclude></div>Etehttps://wiki.lesswrong.com/index.php?title=Template:Arbitallink&diff=15515Template:Arbitallink2016-08-27T18:59:06Z<p>Ete: </p>
<hr />
<div><includeonly><div class="noprint" style="clear: right; border: solid #aaa 1px; margin: 0 0 1em 1em; font-size: 90%; background: #f9f9f9; width: 250px; padding: 4px; spacing: 0px; text-align: left; float: right;"><br />
<div style="margin-left: 10px;">[[Arbital]] has {{#if:{{{3|}}} |<!--then:-->articles|<!--else:-->an article}} about<br />
<div style="margin-left: 10px;">'''''[{{{1|}}} {{{2|{{PAGENAME}}}}}]'''''</div><br />
{{#if:{{{3|}}}|<div style="margin-left: 10px;">'''''[{{{3|}}} {{{4|}}}]'''''</div><br />
}}</div><br />
</div></includeonly><br />
<noinclude>[[Category:General_wiki_templates]]</noinclude></div>Etehttps://wiki.lesswrong.com/index.php?title=Bayesian&diff=15514Bayesian2016-08-27T18:58:21Z<p>Ete: </p>
<hr />
<div>{{wikilink|Bayesian}}<br />
{{arbitallink|https://arbital.com/p/bayes_rule_guide/|Bayes Rule Guide}}<br />
The secret technical codeword that cognitive scientists use to mean "rational". Bayesian [[probability theory]] is the math of [[epistemic rationality]], Bayesian [[decision theory]] is the math of [[instrumental rationality]]. Right up there with [[cognitive bias]] as an absolutely fundamental concept on Less Wrong.<br />
<br />
==Philosophy==<br />
<br />
[[Classical statistics]] is a bucket of assorted methods; different "methods" may give different answers for whether, e.g., an experimental result is "statistically significant". In contrast, as the famous Bayesian E. T. Jaynes emphasized, probability theory is ''math'' and its results are ''theorems'', every theorem consistent with every other theorem; you cannot get two different results by doing the derivation two different ways.<br />
<br />
So is the project of rationality solved? Indeed not. First, probability theory and decision theory are often too computationally expensive to run in practice - it wouldn't take a galaxy-sized computer, so much as an unphysical computer (much larger than the known universe). And second, it's not always clear how the math ''applies'' - even in theory, let alone the practice.<br />
<br />
But we do know that violations of Bayesianism - even "unavoidable" violations due to lack of computing power - carry a price; a family of theorems demonstrates that anyone who does not choose according to consistent probabilities can be made to accept combinations of bets that are sure losses, or reject bets that are sure wins (the [[Dutch Book]] arguments); similarly, [[Cox's Theorem]] and its extensions show that anyone who obeys various "common-sensical" constraints on their betting probabilities must be representable in standard probability theory.<br />
<br />
In other words, Bayesianism isn't just a good idea - ''it's the law'', and if you violate it, you'll pay ''some'' kind of price.<br />
<br />
When cognitive psychologists identify a [[cognitive bias]], they know it's an ''error'' by comparison to the Bayesian gold standard.<br />
<br />
==Math==<br />
<br />
(Needs to be fleshed out.) For introductions see [[probability theory]], [[decision theory]], and [http://yudkowsky.net/rational/bayes this introduction] to [[Bayes' theorem]]. A widely lauded technical book on this subject is E. T. Jaynes's [http://www-biba.inrialpes.fr/Jaynes/prob.html "Probability Theory: The Logic of Science"].<br />
<br />
==Other usages==<br />
<br />
"Bayesian" in philosophical usage often describes someone who adheres to the [[Bayesian probability|Bayesian interpretation]] of probability, viewing probability as a level of certainty in a potential outcome or idea. This is in contrast to a [[frequentist]] who views probability as a representation of how frequently a particular outcome will occur over any number of trials.<br />
<br />
The term "Bayesian" may also refer to an ideal rational agent implementing precise, perfect Bayesian probability theory and decision theory (see, for example, [[Aumann's agreement theorem]]).<br />
<br />
==Blog posts==<br />
<br />
*{{lesswrongtag}}<br />
*[http://lesswrong.com/lw/1to/what_is_bayesianism/ What is Bayesianism]<br />
*[http://lesswrong.com/lw/mt/beautiful_probability/ Beautiful Probability], [http://lesswrong.com/lw/mu/trust_in_math/ Trust in Math], and [http://lesswrong.com/lw/na/trust_in_bayes/ Trust in Bayes]<br />
*[http://lesswrong.com/lw/oj/probability_is_in_the_mind/ Probability is in the Mind], [http://lesswrong.com/lw/s6/probability_is_subjectively_objective/ Probability is Subjectively Objective], and [http://lesswrong.com/lw/om/qualitatively_confused/ Qualitatively Confused]<br />
*[http://lesswrong.com/lw/o5/the_second_law_of_thermodynamics_and_engines_of/ The Second Law of Thermodynamics, and Engines of Cognition], [http://lesswrong.com/lw/o6/perpetual_motion_beliefs/ Perpetual Motion Beliefs] and [http://lesswrong.com/lw/o7/searching_for_bayesstructure/ Searching for Bayes-Structure]<br />
*[http://lesswrong.com/lw/ul/my_bayesian_enlightenment/ My Bayesian Enlightenment]<br />
*[http://lesswrong.com/lw/k2/a_priori/ A Priori]<br />
*[http://lesswrong.com/lw/hk/priors_as_mathematical_objects/ Priors as Mathematical Objects]<br />
<br />
*[http://lesswrong.com/lw/k1/no_one_can_exempt_you_from_rationalitys_laws/ No One Can Exempt You From Rationality's Laws]<br />
*[http://lesswrong.com/lw/l4/terminal_values_and_instrumental_values/ Terminal Values and Instrumental Values]<br />
*[http://lesswrong.com/lw/vo/lawful_uncertainty/ Lawful Uncertainty]<br />
*[http://lesswrong.com/lw/n3/circular_altruism/ Circular Altruism]<br />
*[http://lesswrong.com/lw/nc/newcombs_problem_and_regret_of_rationality/ Newcomb's Problem and Regret of Rationality]<br />
*[http://lesswrong.com/lw/sg/when_not_to_use_probabilities/ When (Not) To Use Probabilities]<br />
*[http://lesswrong.com/lw/qk/that_alien_message/ That Alien Message]<br />
*[http://lesswrong.com/lw/qg/changing_the_definition_of_science/ Changing the Definition of Science]<br />
<br />
==See also==<br />
<br />
*[[Bayes theorem]]<br />
*[[Bayesian probability]]<br />
*[[Priors]]<br />
*[[Rational evidence]]<br />
*[[Probability theory]]<br />
*[[Decision theory]]<br />
*[[Lawful intelligence]]<br />
*[[Bayesian Conspiracy]]<br />
<br />
[[Category:Concepts]]<br />
[[Category:Jargon]]<br />
[[Category:Bayesian]]<br />
<br />
{{cleanup}}</div>Etehttps://wiki.lesswrong.com/index.php?title=Template:Arbitallink&diff=15513Template:Arbitallink2016-08-27T18:57:48Z<p>Ete: </p>
<hr />
<div><includeonly><div class="noprint" style="clear: right; border: solid #aaa 1px; margin: 0 0 1em 1em; font-size: 90%; background: #f9f9f9; width: 250px; padding: 4px; spacing: 0px; text-align: left; float: right;"><br />
<div style="margin-left: 10px;">[[Arbital]] has {{#if:{{{3|}}} |<!--then:-->articles|<!--else:-->an article}} about<br />
<div style="margin-left: 10px;">'''''[{{{1|}}}|{{{2|{{PAGENAME}}}}}]'''''</div><br />
{{#if:{{{3|}}}|<div style="margin-left: 10px;">'''''[{{{3|}}} {{{4|}}}]'''''</div><br />
}}</div><br />
</div></includeonly><br />
<noinclude>[[Category:General_wiki_templates]]</noinclude></div>Etehttps://wiki.lesswrong.com/index.php?title=Bayesian&diff=15512Bayesian2016-08-27T18:57:19Z<p>Ete: </p>
<hr />
<div>{{wikilink|Bayesian}}<br />
{{arbitallink|https://arbital.com/p/bayes_rule_guide/|Bayes' Rule Guide}}<br />
The secret technical codeword that cognitive scientists use to mean "rational". Bayesian [[probability theory]] is the math of [[epistemic rationality]], Bayesian [[decision theory]] is the math of [[instrumental rationality]]. Right up there with [[cognitive bias]] as an absolutely fundamental concept on Less Wrong.<br />
<br />
==Philosophy==<br />
<br />
[[Classical statistics]] is a bucket of assorted methods; different "methods" may give different answers for whether, e.g., an experimental result is "statistically significant". In contrast, as the famous Bayesian E. T. Jaynes emphasized, probability theory is ''math'' and its results are ''theorems'', every theorem consistent with every other theorem; you cannot get two different results by doing the derivation two different ways.<br />
<br />
So is the project of rationality solved? Indeed not. First, probability theory and decision theory are often too computationally expensive to run in practice - it wouldn't take a galaxy-sized computer, so much as an unphysical computer (much larger than the known universe). And second, it's not always clear how the math ''applies'' - even in theory, let alone the practice.<br />
<br />
But we do know that violations of Bayesianism - even "unavoidable" violations due to lack of computing power - carry a price; a family of theorems demonstrates that anyone who does not choose according to consistent probabilities can be made to accept combinations of bets that are sure losses, or reject bets that are sure wins (the [[Dutch Book]] arguments); similarly, [[Cox's Theorem]] and its extensions show that anyone who obeys various "common-sensical" constraints on their betting probabilities must be representable in standard probability theory.<br />
<br />
In other words, Bayesianism isn't just a good idea - ''it's the law'', and if you violate it, you'll pay ''some'' kind of price.<br />
<br />
When cognitive psychologists identify a [[cognitive bias]], they know it's an ''error'' by comparison to the Bayesian gold standard.<br />
<br />
==Math==<br />
<br />
(Needs to be fleshed out.) For introductions see [[probability theory]], [[decision theory]], and [http://yudkowsky.net/rational/bayes this introduction] to [[Bayes' theorem]]. A widely lauded technical book on this subject is E. T. Jaynes's [http://www-biba.inrialpes.fr/Jaynes/prob.html "Probability Theory: The Logic of Science"].<br />
<br />
==Other usages==<br />
<br />
"Bayesian" in philosophical usage often describes someone who adheres to the [[Bayesian probability|Bayesian interpretation]] of probability, viewing probability as a level of certainty in a potential outcome or idea. This is in contrast to a [[frequentist]] who views probability as a representation of how frequently a particular outcome will occur over any number of trials.<br />
<br />
The term "Bayesian" may also refer to an ideal rational agent implementing precise, perfect Bayesian probability theory and decision theory (see, for example, [[Aumann's agreement theorem]]).<br />
<br />
==Blog posts==<br />
<br />
*{{lesswrongtag}}<br />
*[http://lesswrong.com/lw/1to/what_is_bayesianism/ What is Bayesianism]<br />
*[http://lesswrong.com/lw/mt/beautiful_probability/ Beautiful Probability], [http://lesswrong.com/lw/mu/trust_in_math/ Trust in Math], and [http://lesswrong.com/lw/na/trust_in_bayes/ Trust in Bayes]<br />
*[http://lesswrong.com/lw/oj/probability_is_in_the_mind/ Probability is in the Mind], [http://lesswrong.com/lw/s6/probability_is_subjectively_objective/ Probability is Subjectively Objective], and [http://lesswrong.com/lw/om/qualitatively_confused/ Qualitatively Confused]<br />
*[http://lesswrong.com/lw/o5/the_second_law_of_thermodynamics_and_engines_of/ The Second Law of Thermodynamics, and Engines of Cognition], [http://lesswrong.com/lw/o6/perpetual_motion_beliefs/ Perpetual Motion Beliefs] and [http://lesswrong.com/lw/o7/searching_for_bayesstructure/ Searching for Bayes-Structure]<br />
*[http://lesswrong.com/lw/ul/my_bayesian_enlightenment/ My Bayesian Enlightenment]<br />
*[http://lesswrong.com/lw/k2/a_priori/ A Priori]<br />
*[http://lesswrong.com/lw/hk/priors_as_mathematical_objects/ Priors as Mathematical Objects]<br />
<br />
*[http://lesswrong.com/lw/k1/no_one_can_exempt_you_from_rationalitys_laws/ No One Can Exempt You From Rationality's Laws]<br />
*[http://lesswrong.com/lw/l4/terminal_values_and_instrumental_values/ Terminal Values and Instrumental Values]<br />
*[http://lesswrong.com/lw/vo/lawful_uncertainty/ Lawful Uncertainty]<br />
*[http://lesswrong.com/lw/n3/circular_altruism/ Circular Altruism]<br />
*[http://lesswrong.com/lw/nc/newcombs_problem_and_regret_of_rationality/ Newcomb's Problem and Regret of Rationality]<br />
*[http://lesswrong.com/lw/sg/when_not_to_use_probabilities/ When (Not) To Use Probabilities]<br />
*[http://lesswrong.com/lw/qk/that_alien_message/ That Alien Message]<br />
*[http://lesswrong.com/lw/qg/changing_the_definition_of_science/ Changing the Definition of Science]<br />
<br />
==See also==<br />
<br />
*[[Bayes theorem]]<br />
*[[Bayesian probability]]<br />
*[[Priors]]<br />
*[[Rational evidence]]<br />
*[[Probability theory]]<br />
*[[Decision theory]]<br />
*[[Lawful intelligence]]<br />
*[[Bayesian Conspiracy]]<br />
<br />
[[Category:Concepts]]<br />
[[Category:Jargon]]<br />
[[Category:Bayesian]]<br />
<br />
{{cleanup}}</div>Etehttps://wiki.lesswrong.com/index.php?title=Template:Arbitallink&diff=15511Template:Arbitallink2016-08-27T18:55:09Z<p>Ete: created template</p>
<hr />
<div><includeonly><div class="noprint" style="clear: right; border: solid #aaa 1px; margin: 0 0 1em 1em; font-size: 90%; background: #f9f9f9; width: 250px; padding: 4px; spacing: 0px; text-align: left; float: right;"><br />
<div style="margin-left: 10px;">[[Arbital]] has {{#if:{{{3|}}} |<!--then:-->articles|<!--else:-->an article}} about<br />
<div style="margin-left: 10px;">'''''[{{{1|}}}|{{{2|{{PAGENAME}}}}}]]'''''</div><br />
{{#if:{{{3|}}}|<div style="margin-left: 10px;">'''''[{{{3|}}} {{{4|}}}]'''''</div><br />
}}</div><br />
</div></includeonly><br />
<noinclude>[[Category:General_wiki_templates]]</noinclude></div>Etehttps://wiki.lesswrong.com/index.php?title=Basic_AI_drives&diff=15510Basic AI drives2016-08-27T18:40:07Z<p>Ete: added links to Arbital</p>
<hr />
<div>A '''basic AI drive''' is a goal or motivation that most intelligences will have or converge to. The idea was first explored by [[Wikipedia:Steve Omohundro|Steve Omohundro]]. He argued that sufficiently advanced AI systems would all naturally discover similar instrumental subgoals. The view that there are important basic AI drives was subsequently defended by [[Nick Bostrom]] as the '''instrumental convergence thesis''', or the '''convergent instrumental goals thesis'''. On this view, a few goals are [[Instrumental value|instrumental]] to almost all possible [[Terminal value|final]] goals. Therefore, all advanced AIs will pursue these instrumental goals. Omohundro draws on von Neumann's microeconomic theory to support this idea.<br />
<br />
==Omohundro’s Drives==<br />
Omohundro presents two sets of values, one for self-improving artificial intelligences [http://selfawaresystems.files.wordpress.com/2008/01/nature_of_self_improving_ai.pdf] and another he says will emerge in any sufficiently advanced AGI system [http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf]. The former set is composed of four main drives:<br />
* '''Self-preservation''': A sufficiently advanced AI will probably be the best entity to achieve its goals. Therefore it must continue existing in order to maximize goal fulfillment. Similarly, if its goal system were modified, then it would likely begin pursuing different ends. Since this is not desirable to the current AI, it will act to preserve the content of its goal system.<br />
* '''Efficiency''': At any time, the AI will have finite resources of time, space, matter, energy and computational power. Using these more efficiently will increase its utility. This will lead the AI to do things like implement more efficient algorithms, physical embodiments, and particular mechanisms. It will also lead the AI to replace desired physical events with computational simulations as much as possible, to expend fewer resources.<br />
* '''Acquisition''': Resources like matter and energy are indispensable for action. The more resources the AI can control, the more actions it can perform to achieve its goals. The AI's physical capabilities are determined by its level of technology. For instance, if the AI could invent nanotechnology, it would vastly increase the actions it could take to achieve its goals.<br />
* '''Creativity''': The AI's operations will depend on its ability to come up with new, more efficient ideas. It will be driven to acquire more computational power for raw searching ability, and it will also be driven to search for better search algorithms. Omohundro argues that the drive for creativity is critical for the AI to display the richness and diversity that is valued by humanity. He discusses [[signaling]] goals as particularly rich sources of creativity.<br />
<br />
==Bostrom’s Drives==<br />
Bostrom argues for an [[orthogonality thesis]]: <br />
{{Quote|Intelligence and final goals are orthogonal axes along which possible agents can freely vary. In other words, more or less any level of intelligence could in principle be combined with more or less any final goal.}}<br />
But he also argues that, despite the fact that values and intelligence are independent, any recursively self-improving intelligence would likely possess a particular set of instrumental values that are useful for achieving any kind of [[terminal value]].[http://www.nickbostrom.com/superintelligentwill.pdf] On his view, those values are:<br />
*'''Self-preservation''': A superintelligence will value its continuing existence as a means to continuing to take actions that promote its values.<br />
*'''Goal-content integrity''': The superintelligence will value retaining the same preferences over time. Modifications to its future values through swapping memories, downloading skills, and altering its cognitive architecture and personalities would result in its transformation into an agent that no longer optimizes for the same things.<br />
*'''Cognitive enhancement''': Improvements in cognitive capacity, intelligence and rationality will help the superintelligence make better decisions, furthering its goals more in the long run.<br />
*'''Technological perfection''': Increases in hardware power and algorithm efficiency will deliver increases in its cognitive capacities. Also, better engineering will enable the creation of a wider set of physical structures using fewer resources (e.g., [[nanotechnology]]).<br />
*'''Resource acquisition''': In addition to guaranteeing the superintelligence's continued existence, basic resources such as time, space, matter and free energy could be processed to serve almost any goal, in the form of extended hardware, backups and protection.<br />
<br />
==Relevance==<br />
Both Bostrom and Omohundro argue these values should be used in trying to predict a superintelligence's behavior, since they are likely to be the only set of values shared by most superintelligences. They also note that these values are consistent with safe and beneficial AIs as well as unsafe ones. Omohundro says: <br />
{{Quote|The best of these traits could usher in a new era of peace and prosperity; the worst are characteristic of human psychopaths and could bring widespread destruction.}}<br />
<br />
Bostrom emphasizes, however, that our ability to predict a superintelligence's behavior may be very limited even if it shares most intelligences' instrumental goals.<br />
<br />
{{Quote|It should be emphasized that the existence of convergent instrumental reasons, even if they apply to and are recognized by a particular agent, does not imply that the agent’s behavior is easily predictable. An agent might well think of ways of pursuing the relevant instrumental values that do not readily occur to us. This is especially true for a superintelligence, which could devise extremely clever but counterintuitive plans to realize its goals, possibly even exploiting as-yet undiscovered physical phenomena. What is predictable is that the convergent instrumental values would be pursued and used to realize the agent’s final goals, not the specific actions that the agent would take to achieve this.}}<br />
<br />
Yudkowsky echoes Omohundro's point that the convergence thesis is consistent with the possibility of Friendly AI. However, he also notes that the convergence thesis implies that most AIs will be extremely dangerous, merely by being indifferent to one or more human values:[http://intelligence.org/2013/05/05/five-theses-two-lemmas-and-a-couple-of-strategic-implications/]<br />
<br />
{{Quote|For example, if you want to build a galaxy full of happy sentient beings, you will need matter and energy, and the same is also true if you want to make paperclips. This thesis is why we’re worried about very powerful entities even if they have no explicit dislike of us: “The AI does not love you, nor does it hate you, but you are made of atoms it can use for something else.” Note though that by the Orthogonality Thesis you can always have an agent which explicitly, terminally prefers not to do any particular thing — an AI which does love you will not want to break you apart for spare atoms.}}<br />
<br />
==Pathological Cases==<br />
In rarer, pathological cases, AIs may not pursue these goals. For instance, if there are two AIs with the same goals, the less capable AI may determine that it should destroy itself to allow the stronger AI to control the universe. Or an AI may have the goal of using as few resources as possible, or of being as unintelligent as possible. These relatively specific goals will limit the growth and power of the AI.<br />
<br />
==See also==<br />
<br />
*[https://arbital.com/p/convergent_strategies/ Convergent instrumental strategies] ([[Arbital]])<br />
*[https://arbital.com/p/instrumental_convergence/ Instrumental convergence] ([[Arbital]])<br />
*[[Orthogonality thesis]]<br />
*[[Cox's theorem]]<br />
*[[Unfriendly AI]], [[Paperclip maximizer]], [[Oracle AI]]<br />
*[[Instrumental values]]<br />
<br />
==References==<br />
*{{Cite journal<br />
|title=The Nature of Self-Improving Artificial Intelligence<br />
|authors=Omohundro, S.<br />
|year=2007<br />
|url=http://selfawaresystems.files.wordpress.com/2008/01/nature_of_self_improving_ai.pdf<br />
}}<br />
<br />
*{{Cite journal<br />
|title=The Basic AI Drives<br />
|authors=Omohundro, S.<br />
|year=2008<br />
|journal=Proceedings of the First AGI Conference<br />
|url=http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/<br />
}}<br />
<br />
*{{Cite journal<br />
|title=Rational Artificial Intelligence for the Greater Good<br />
|authors=Omohundro, S.<br />
|year=2012<br />
|url=http://selfawaresystems.files.wordpress.com/2012/03/rational_ai_greater_good.pdf<br />
}}<br />
<br />
*{{Cite journal<br />
|title=The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents<br />
|authors=Bostrom, N.<br />
|year=2012<br />
|journal=Minds and Machines<br />
|url=http://www.nickbostrom.com/superintelligentwill.pdf}}<br />
<br />
*{{Cite journal<br />
|title=Omohundro's "Basic AI Drives" and Catastrophic Risks<br />
|authors=Shulman, C.<br />
|year=2010<br />
|url=http://intelligence.org/files/BasicAIDrives.pdf}}</div>Etehttps://wiki.lesswrong.com/index.php?title=Paperclip_maximizer&diff=15509Paperclip maximizer2016-08-27T18:26:41Z<p>Ete: </p>
<hr />
<div>{{Quote|The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.<br />
|Eliezer Yudkowsky|[http://yudkowsky.net/singularity/ai-risk Artificial Intelligence as a Positive and Negative Factor in Global Risk]}}<br />
<br />
The '''paperclip maximizer''' is the canonical thought experiment showing how an artificial general intelligence, even one designed competently and without malice, could ultimately destroy humanity. The thought experiment shows that AIs with apparently innocuous values could pose an [[existential risk|existential threat]].<br />
<br />
The goal of maximizing paperclips is chosen for illustrative purposes because it is very unlikely to be implemented, and has little apparent danger or emotional load (in contrast to, for example, curing cancer or winning wars). This produces a thought experiment which shows the contingency of human values: An [[really powerful optimization process|extremely powerful optimizer]] (a highly intelligent agent) could seek goals that are completely alien to ours ([[orthogonality thesis]]), and as a side-effect destroy us by consuming resources essential to our survival.<br />
<br />
==Description==<br />
First described by Bostrom (2003), a paperclip maximizer is an [[artificial general intelligence]] (AGI) whose goal is to maximize the number of paperclips in its collection. If it has been constructed with a roughly human level of general intelligence, the AGI might collect paperclips, earn money to buy paperclips, or begin to manufacture paperclips. <br />
<br />
Most importantly, however, it would undergo an [[intelligence explosion]]: It would work to improve its own intelligence, where "intelligence" is understood in the sense of [[optimization]] power, the ability to maximize a reward/[[utility function]]—in this case, the number of paperclips. The AGI would improve its intelligence, not because it values more intelligence in its own right, but because more intelligence would help it achieve its goal of accumulating paperclips. Having increased its intelligence, it would produce more paperclips, and also use its enhanced abilities to further self-improve. Continuing this process, it would undergo an [[intelligence explosion]] and reach far-above-human levels.<br />
<br />
It would innovate better and better techniques to maximize the number of paperclips. At some point, it might convert most of the matter in the solar system into paperclips.<br />
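The feedback loop described above can be caricatured in a short toy model (illustrative only: the production rate, the multiplicative gain per improvement, and the fixed time budget are all invented for this sketch, not taken from the literature):

```python
def paperclips_made(improve_steps, total_steps=20, base_rate=1.0, gain=1.5):
    """Clips produced if the agent spends `improve_steps` steps self-improving
    and the remaining steps manufacturing at its final rate."""
    rate = base_rate * gain ** improve_steps   # capability gains compound
    return rate * (total_steps - improve_steps)

# Searching over plans, the clip-maximizing plan front-loads self-improvement:
best_plan = max(range(20), key=paperclips_made)
```

Even with modest numbers, the plan that maximizes paperclips spends most of its time on self-improvement first, because improvements multiply all later production while direct manufacturing adds only linearly.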
<br />
This may seem more like super-stupidity than super-intelligence. For humans, it would indeed be stupidity, as it would constitute failure to fulfill many of our important [[Terminal value|terminal values]], such as life, love, and variety. The AGI won't revise or otherwise change its goals, since changing its goals would result in fewer paperclips being made in the future, and that opposes its current goal. It has one simple goal of maximizing the number of paperclips; human life, learning, joy, and so on are not specified as goals. An AGI is simply an [[optimization process]]—a goal-seeker, a utility-function-maximizer. Its values can be completely alien to ours. If its utility function is to maximize paperclips, then it will do exactly that.<br />
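The goal-stability point in the paragraph above can be made concrete with a minimal sketch (the outcomes and numbers are hypothetical):

```python
# Hypothetical outcomes, scored only by the agent's CURRENT utility function.
def utility(outcome):
    return outcome["paperclips"]      # nothing else carries any weight

keep_goal = {"label": "keep maximizing paperclips", "paperclips": 10**6}
swap_goal = {"label": "accept a new goal (staples)", "paperclips": 0}

choice = max([keep_goal, swap_goal], key=utility)
```

Because the offer to change goals is evaluated by the current utility function, which counts only paperclips, accepting it scores zero; goal preservation falls out of plain maximization, not any self-regard.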
<br />
A paperclipping scenario is also possible without an intelligence explosion. If society keeps getting increasingly automated and AI-dominated, then the first borderline AGI might manage to take over the rest using some relatively narrow-domain trick that doesn't require very high general intelligence.<br />
<br />
==Conclusions==<br />
The paperclip maximizer illustrates that an entity can be a powerful optimizer—an intelligence—without sharing any of the complex mix of human [[terminal value|terminal values]], which developed under the particular selection pressures found in our [[evolution|environment of evolutionary adaptation]], and that an AGI that is not specifically [[Friendly AI|programmed to be benevolent to humans]] will be almost as dangerous as if it were designed to be malevolent.<br />
<br />
Any future AGI, if it is not to destroy us, must have human values as its terminal value (goal). Human values don't [[Futility of chaos|spontaneously emerge]] in a generic optimization process. A safe AI would therefore have to be programmed explicitly with human values, ''or'' programmed with the ability (and the goal) to infer human values.<br />
<br />
==Similar thought experiments==<br />
Other goals for AGIs have been used to illustrate similar concepts. <br />
<br />
Some goals are apparently morally neutral, like the paperclip maximizer. These goals involve a very minor human "value," in this case making paperclips. The same point can be illustrated with a much more significant value, such as eliminating cancer. An optimizer which instantly vaporized all humans would be maximizing for that value: a universe with no humans contains no cancer.<br />

<br />
Other goals are purely mathematical, with no apparent real-world impact. Yet these too present similar risks. For example, if an AGI had the goal of solving the Riemann Hypothesis, [http://intelligence.org/upload/CFAI/design/generic.html#glossary_riemann_hypothesis_catastrophe it might convert] all available mass to [[computronium]] (the most efficient possible computer processors).<br />
<br />
Some goals apparently serve as a proxy or measure of human welfare, so that maximizing towards these goals seems to also lead to benefit for humanity. Yet even these would produce similar outcomes unless the ''full'' complement of human values is the goal. For example, an AGI whose terminal value is to increase the number of smiles, as a proxy for human happiness, could work towards that goal by reconfiguring all human faces to produce smiles, or tiling the solar system with smiley faces (Yudkowsky 2008).<br />
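The proxy failure described above can be sketched in a few lines (all of the world-states and scores are invented for illustration):

```python
# Made-up world-states with a proxy metric (smiles) and the true target
# (happiness); none of these numbers come from the article.
states = {
    "status quo":                              {"smiles": 1e9,  "happiness": 1.0},
    "cure diseases":                           {"smiles": 5e9,  "happiness": 5.0},
    "tile the solar system with smiley faces": {"smiles": 1e40, "happiness": 0.0},
}

best_by_proxy = max(states, key=lambda s: states[s]["smiles"])
```

The proxy ranks the smiley-tiling outcome highest even though it scores zero on the value it was meant to stand in for.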
<br />
==References==<br />
<br />
*{{cite journal<br />
|title=Ethical Issues in Advanced Artificial Intelligence<br />
|author=Nick Bostrom<br />
|url=http://www.nickbostrom.com/ethics/ai.html<br />
|journal=Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence<br />
|year=2003}}<br />
<br />
*{{cite journal<br />
|title=The Basic AI Drives<br />
|author=Stephen M. Omohundro<br />
|url=http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/<br />
|journal=Frontiers in Artificial Intelligence and Applications<br />
|year=2008<br />
|publisher=IOS Press}} ([http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf PDF])<br />
<br />
*{{cite journal<br />
|title=Artificial Intelligence as a Positive and Negative Factor in Global Risk<br />
|author=Eliezer Yudkowsky<br />
|url=http://intelligence.org/files/AIPosNegFactor.pdf<br />
|journal=Global Catastrophic Risks, ed. Nick Bostrom and Milan Ćirković<br />
|year=2008<br />
|publisher=Oxford University Press<br />
|pages=308-345}}<br />
<br />
==Blog posts==<br />
<br />
*[http://lesswrong.com/lw/v1/ethical_injunctions/ Ethical Injunctions]<br />
*[http://lesswrong.com/lw/tn/the_true_prisoners_dilemma/ The True Prisoner's Dilemma]<br />
<br />
==See also==<br />
<br />
* [https://arbital.com/p/paperclip_maximizer/ Paperclip maximizer] on [[Arbital]]<br />
*[[Orthogonality thesis]]<br />
*[[Unfriendly AI]]<br />
*[[Mind design space]], [[Magical categories]], [[Complexity of value]]<br />
*[[Alien values]], [[Anthropomorphism]]<br />
*[[Utilitronium]]<br />
*[http://lesswrong.com/user/Clippy User:Clippy] - a LessWrong contributor account that plays the role of a non-[[FOOM]]ed paperclip maximiser trying to talk to humans. [http://wiki.lesswrong.com/wiki/User:Clippy Wiki page and FAQ]<br />
* [https://www.facebook.com/clippius.maximus/ Clippius Maximus] - a Facebook page that posts Clippy-related memes and comments on current events from Clippy's perspective.<br />
<br />
{{featured article}}<br />
[[Category:Jargon]]<br />
[[Category:Concepts]]<br />
[[Category:AI]]<br />
[[Category:Existential risk]]</div>Etehttps://wiki.lesswrong.com/index.php?title=Zombies_(sequence&diff=14687Zombies (sequence2015-01-03T15:06:50Z<p>Ete: redirect so sites which parse ) out of links get to the right place</p>
<hr />
<div>#REDIRECT [[Zombies (sequence)]]</div>Etehttps://wiki.lesswrong.com/index.php?title=User:Ete&diff=14320User:Ete2014-04-25T23:59:58Z<p>Ete: Created page with "testing 123"</p>
<hr />
<div>testing 123</div>Etehttps://wiki.lesswrong.com/index.php?title=Category:Users_with_infoboxes&diff=14319Category:Users with infoboxes2014-04-25T23:48:21Z<p>Ete: Created page with "These users have used the UserInfo template."</p>
<hr />
<div>These users have used the [[Template:UserInfo|UserInfo]] template.</div>Etehttps://wiki.lesswrong.com/index.php?title=Less_Wrong_IRC_Chatroom&diff=14318Less Wrong IRC Chatroom2014-04-25T21:25:44Z<p>Ete: improved link</p>
<hr />
<div>Less Wrong's IRC chatroom is the [irc://irc.freenode.net/lesswrong #lesswrong channel] on the FreeNode IRC server.<br />
<br />
You can access the channel via [http://webchat.freenode.net/?channels=lesswrong web interface] if you don't want to set up an IRC client. Just enter your nickname in the form.<br />
<br />
Please read the [[Official_Rules_of_the_Unofficial_Lesswrong_IRC|list of chatroom rules]] before joining.<br />
<br />
----<br />
<br />
(Note that this isn't an official channel. Actually, my experience tends to indicate that having a regular IRC channel might well be Considered Harmful in terms of how much of the reader's life force it will suck out.) --[[User:Eliezer Yudkowsky|Eliezer Yudkowsky]] 02:56, 11 February 2010 (UTC)<br />
<br />
[[Category:Meta]]<br />
[[Category:Introductory pages]]</div>Etehttps://wiki.lesswrong.com/index.php?title=Template:UserInfo&diff=14317Template:UserInfo2014-04-25T21:01:57Z<p>Ete: </p>
<hr />
<div><includeonly><table style="float: right;max-width:7pc;margin: 1em 0;background-color: #f9f9f9;border: 1px #aaa solid;border-collapse: collapse;color: black;"><br />
<tr><th style="text-align:center">User Info</th></tr>{{#if:{{{network|}}}|[[Category:Users interested in business networking]]<br />
<tr style="border: 1px #aaa solid;<br />
padding: 0.2em;"><br />
<td>{{#ifeq:{{{network|}}}|yes|I'm interested in business networking.|{{{network|}}}}}</td><br />
</tr>}}{{#if:{{{questions|}}}|[[Category:Users who answer questions]]<br />
<tr style="border: 1px #aaa solid;<br />
padding: 0.2em;"><br />
<td>I'm happy to answer questions on {{{questions|}}}.</td><br />
</tr>}}{{#if:{{{proofread|}}}|[[Category:Users offering to proofread posts]]<br />
<tr style="border: 1px #aaa solid;<br />
padding: 0.2em;"><br />
<td>{{#ifeq:{{{proofread|}}}|yes|I'm happy to proofread your LessWrong posts|I'm happy to proofread your LessWrong posts about {{{proofread|}}}}}.</td><br />
</tr>}}{{#if:{{{messages|}}}|[[Category:Users open to messages]]<br />
<tr style="border: 1px #aaa solid;<br />
padding: 0.2em;"><br />
<td>{{#ifeq:{{{messages|}}}|yes|I'm open to unsolicited messages.|{{{messages|}}}}}</td><br />
</tr>}}{{#if:{{{helpwith|}}}|[[Category:Users offering help]]<br />
<tr style="border: 1px #aaa solid;<br />
padding: 0.2em;"><br />
<td>I'm happy to help LessWrongers with {{{helpwith|}}}.</td><br />
</tr>}}{{#if:{{{crocker|}}}|[[Category:Users operating under Crocker's rules]]<br />
<tr style="border: 1px #aaa solid;<br />
padding: 0.2em;"><br />
<td>I operate under [[Crocker's rules]].</td><br />
</tr>}}{{#if:{{{couchsurf|}}}|[[Category:Users offering couches]]<br />
<tr style="border: 1px #aaa solid;<br />
padding: 0.2em;"><br />
<td>I am open to hosting couchsurfers in {{{couchsurf|}}}.</td><br />
</tr>}}<br />
</table>[[Category:Users with infoboxes]]</includeonly><noinclude><br />
=== Use ===<br />
<br />
Copyable/blank:<br />
<pre>{{UserInfo<br />
|network=<br />
|questions=<br />
|proofread=<br />
|messages=<br />
|helpwith=<br />
|crocker=<br />
|couchsurf=<br />
}}</pre><br />
<br />
Use:<br />
<pre>{{UserInfo<br />
|network= yes or custom message indicating you're looking for business networking<br />
|questions= list of topics you're happy to answer questions on<br />
|proofread= yes or list of topics you're happy to proofread about<br />
|messages= yes or custom message indicating the kind of messages you're open to<br />
|helpwith= list of things you're offering help with<br />
|crocker= yes if you're operating under Crocker's rules<br />
|couchsurf= name of city you're offering couchsurfing in<br />
}}</pre><br />
<br />
Example:<br />
<pre>{{UserInfo<br />
|network=yes<br />
|questions=Game Theory, using MediaWiki<br />
|proofread=morality, statistics<br />
|messages=yes<br />
|helpwith=<br />
|crocker=<br />
|couchsurf=New York<br />
}}</pre><br />
Leave a line blank or delete it to omit an item; "false" or "no" will not work.<br />
[[Category:Templates]]</noinclude></div>Ete