Lesswrongwiki - User contributions [en] 2021-05-08T14:08:42Z (MediaWiki 1.31.12)

Bayesian decision theory (diff 11278) 2012-10-22T17:36:30Z <p>Pedrochaves: /* Bayesian reasoning in everyday life */</p>
<hr />
<div>{{wikilink}}<br />
'''Bayesian decision theory''' refers to a [[decision theory]] informed by [[Bayesian probability]]. It is a statistical framework that quantifies the tradeoffs between alternative decisions, making use of probabilities and costs. An agent operating under such a decision theory uses the concepts of Bayesian statistics to estimate the [[expected value]] of its actions and to update its expectations based on new information. Such agents are often referred to as estimators.<br />
<br />
Consider any kind of probability distribution - such as the distribution over tomorrow's weather (encompassing several variables such as humidity, rain or temperature). From a Bayesian perspective, this represents a [[priors|prior]] distribution: it encodes how strongly we believe ''today'' in each possible weather outcome tomorrow. This contrasts with frequentist inference, the classical interpretation of probability, where conclusions about an experiment are drawn from a set of repetitions of that experiment, each producing statistically independent results. For a frequentist, the probability function is simply a distribution function with no further interpretation. A Bayesian decision rule is one that consistently chooses the decision that minimizes expected risk under the probability distribution. This risk can be seen as the expected cost of the gap between beliefs and real outcomes - between the prediction and the actual weather tomorrow.<br />
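The decision rule described above can be sketched in a few lines of code. This is a minimal illustration, not part of the original article: the prior over weather and the loss values are made-up assumptions chosen only to show how expected loss is computed and minimized.

```python
prior = {"rain": 0.3, "sun": 0.7}  # assumed prior over tomorrow's weather

# loss[action][state]: assumed cost of each action in each state of the world
loss = {
    "take_umbrella": {"rain": 0.0, "sun": 1.0},  # mild nuisance if it stays dry
    "no_umbrella":   {"rain": 5.0, "sun": 0.0},  # soaked if it rains
}

def expected_loss(action):
    """Risk of an action: its loss averaged over the prior."""
    return sum(prior[state] * loss[action][state] for state in prior)

# The Bayes decision rule picks the action with minimal expected loss.
bayes_action = min(loss, key=expected_loss)
print(bayes_action, expected_loss(bayes_action))  # take_umbrella 0.7
```

With these particular numbers, taking the umbrella costs 0.3·0 + 0.7·1 = 0.7 in expectation, versus 0.3·5 = 1.5 for leaving it behind, so the rule takes the umbrella.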
<br />
Computer algorithms such as those studied in [[Machine learning]] can also use Bayesian methods. Beyond these explicit implementations, it has also been observed that naturally evolved [http://en.wikipedia.org/wiki/Bayesian_brain nervous systems] mirror these probabilistic methods when they adapt to an uncertain environment. Such systems, like the human brain, seem to construct Bayesian models of their environment and then use those models to make decisions, constantly updating and reconfiguring the models and distributions according to feedback from the environment.<br />
<br />
==Bayesian reasoning in everyday life==<br />
What Less Wrong refers to as [[Rationality]] is an effort to make conscious thoughts and decisions a better approximation of Bayesian decision theory, in order to better understand the world and achieve one's goals. This process can involve [http://lesswrong.com/lw/8uj/compressing_reality_to_math/ applying math to reality] in a simplified and extremely useful way:<br />
<br />
You receive a phone call from a friend inviting you to do something tomorrow - playing a board game, as you usually do. The question arises: where to play? At your apartment or outside in the park? <br />
<br />
You check the weather forecast and conclude there is a 50% chance of rain. Since playing the game commits you to the place you chose, you add another option to your model - just talking instead of playing, which leaves you free to move according to the weather.<br />
<br />
If you attach different preferences to each activity and each location, weighted by the probability of rain, you can build a model that clarifies the problem and helps you decide. That way you can weigh both variables in a more informed and balanced way, and so make better decisions.<br />
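The board-game scenario above can be turned into a concrete expected-utility calculation. This is an illustrative sketch: only the 50% rain probability comes from the text; the enjoyment scores for each (activity, place) pair are invented for the example.

```python
p_rain = 0.5  # from the forecast in the example above

# utility[(activity, place)][weather]: made-up enjoyment scores
utility = {
    ("play", "apartment"): {"rain": 6, "sun": 6},   # dry either way, but indoors
    ("play", "park"):      {"rain": 0, "sun": 10},  # great in sun, ruined by rain
    ("talk", "park"):      {"rain": 5, "sun": 8},   # flexible: relocate if it rains
}

def expected_utility(option):
    u = utility[option]
    return p_rain * u["rain"] + (1 - p_rain) * u["sun"]

best = max(utility, key=expected_utility)
print(best, expected_utility(best))  # ('talk', 'park') 6.5
```

Under these assumed scores, the flexible option (talking in the park) edges out the safe one (playing in the apartment, expected utility 6.0), which is exactly the kind of clarity such a model is meant to provide.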
<br />
==Further Reading & References==<br />
*Berger, James O. (1985). Statistical Decision Theory and Bayesian Analysis (2nd ed.). New York: Springer-Verlag. ISBN 0-387-96098-8. MR 0804611<br />
*Bernardo, José M.; Smith, Adrian F. M. (1994). Bayesian Theory. Wiley. ISBN 0-471-92416-4. MR 1274699<br />
<br />
==See also==<br />
<br />
*[[Bayesian probability]]<br />
*[[Decision theory]]<br />
<br />
[[Category:Concepts]]<br />
[[Category:Bayesian]]</div>

Priors (diff 11225) 2012-10-20T17:12:56Z <p>Pedrochaves: /* Updating prior probabilities */</p>
<hr />
<div>In the context of [[Bayes's Theorem]], '''Priors''' refer generically to the beliefs an agent holds regarding a fact, hypothesis or consequence, before being presented with evidence. More technically, in order for the agent to calculate a posterior probability using Bayes's Theorem, a prior probability and a [[likelihood distribution]] are needed.<br />
<br />
==Examples==<br />
<br />
Suppose you had a barrel containing some number of red and white balls. You start with the belief that each ball was independently assigned red color (vs. white color) at some fixed probability. Furthermore, you start out ignorant of this fixed probability (the parameter could be anywhere between 0 and 1). Each red ball you see then makes it ''more'' likely that the next ball will be red (following a [http://en.wikipedia.org/wiki/Rule_of_succession Laplacian Rule of Succession]).<br />
<br />
On the other hand, if you start out with the prior belief that the barrel contains exactly 10 red balls and 10 white balls, then each red ball you see makes it ''less'' likely that the next ball will be red (because there are fewer red balls remaining).<br />
<br />
Thus our prior can affect how we interpret the evidence. The first prior is an inductive prior - things that happened before are predicted to happen again with greater probability. The second prior is anti-inductive - the more red balls we see, the fewer we expect to see in the future.<br />
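The contrast between the two priors above can be made concrete. This is a hypothetical sketch of the barrel example: the inductive prior follows Laplace's rule of succession, while the fixed-composition prior (exactly 10 red and 10 white) updates by simply counting the balls that remain.

```python
# Inductive prior: Laplace's rule of succession.
# After seeing r red balls in n draws, P(next is red) = (r + 1) / (n + 2).
def laplace_next_red(r, n):
    return (r + 1) / (n + 2)

# Anti-inductive prior: the barrel holds exactly 10 red and 10 white balls.
# After removing r red and w white, P(next is red) = red left / balls left.
def fixed_next_red(r, w, red=10, white=10):
    return (red - r) / (red + white - r - w)

# The same evidence (a run of red balls) pushes the two beliefs apart:
inductive = [laplace_next_red(r, r) for r in range(4)]  # 1/2, 2/3, 3/4, 4/5: rising
anti = [fixed_next_red(r, 0) for r in range(4)]         # 10/20, 9/19, 8/18, 7/17: falling
```

After three red balls in a row, the inductive prior assigns probability 4/5 to the next ball being red, while the fixed-composition prior has dropped to 7/17: identical evidence, opposite updates, purely because of the priors.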
<br />
In both cases, you started out believing something about the barrel - presumably because someone else told you, or because you saw it with your own eyes. But their words, or even your own eyesight, were themselves evidence, and you must have had prior beliefs about probabilities and likelihoods in order to interpret that evidence. So it seems that an ideal Bayesian would need some sort of inductive prior from the very moment they were born. Where an ideal Bayesian would get this prior has occasionally been a matter of considerable controversy in the philosophy of probability.<br />
<br />
As a real-life example, consider two leaders from different political parties. Each one has his own beliefs about social organization and the roles of people and government in society. These differences can be attributed to a wide range of factors, from genetic variability to the influence of education on their personalities, and they condition the policies and laws each wants to implement. However, neither can show that his beliefs are better than those of the other, unless he can show that his priors were generated by sources which track reality better<ref>Robin Hanson (2006). "Uncommon Priors Require Origin Disputes". Theory and Decision 61 (4) 319–328. http://hanson.gmu.edu/prior.pdf</ref>.<br />
<br />
==Updating prior probabilities==<br />
It is important to notice that priors represent a commitment to a certain belief. That is, as seen in this [http://lesswrong.com/lw/ear/whats_the_value_of_information/7anb Less Wrong discussion], you cannot ''shift'' your prior. Rather, after being presented with the evidence, you update your prior probability, and the result is a posterior probability.<br />
<br />
Note, however, that it can make sense to informally talk about updating priors when dealing with a sequence of inferences. In such cases the posterior probability from one inference becomes the prior for the next, so it is often convenient to refer to it that way.<br />
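The sequence-of-inferences point can be shown with a minimal worked example. This is an illustrative sketch, not from the article: a binary hypothesis H, with each observation assumed to be twice as likely under H (likelihood 0.8) as under not-H (0.4).

```python
def update(prior, lik_h, lik_not_h):
    """One Bayes update for a binary hypothesis H given a single observation."""
    joint_h = prior * lik_h
    return joint_h / (joint_h + (1 - prior) * lik_not_h)

belief = 0.5  # initial prior P(H)
for _ in range(3):
    # The posterior from each observation serves as the prior for the next.
    belief = update(belief, 0.8, 0.4)
# belief climbs 0.5 -> 2/3 -> 0.8 -> 8/9
```

Each intermediate value is a posterior with respect to the evidence already seen, and a prior with respect to the evidence still to come, which is why the informal phrase "updating your priors" is harmless in this setting.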
<br />
==References==<br />
<br />
<references /><br />
<br />
==Blog posts==<br />
<br />
*[http://lesswrong.com/lw/hk/priors_as_mathematical_objects/ Priors as Mathematical Objects]<br />
*[http://lesswrong.com/lw/hg/inductive_bias/ "Inductive Bias"]<br />
*[http://lesswrong.com/lw/s6/probability_is_subjectively_objective/ Probability is Subjectively Objective]<br />
*[http://lesswrong.com/lw/em/bead_jar_guesses/ Bead Jar Guesses] by [[Alicorn]] - Applied scenario about forming priors.<br />
<br />
==See also==<br />
<br />
*[[Evidence]]<br />
*[[Inductive bias]]<br />
*[[Belief update]]<br />
<br />
[[Category:Jargon]]<br />
[[Category:Concepts]]<br />
[[Category:Bayesian]]</div>Pedrochaveshttps://wiki.lesswrong.com/index.php?title=Priors&diff=11224Priors2012-10-20T17:12:15Z<p>Pedrochaves: </p>
<hr />
<div>In the context of [[Bayes's Theorem]], '''Priors''' refer generically to the beliefs an agent holds regarding a fact, hypothesis or consequence, before being presented with evidence. More technically, in order for this agent to calculate a posterior probability using Bayes's Theorem, this referred prior probability and [[likelihood distribution]] are needed.<br />
<br />
==Examples==<br />
<br />
Suppose you had a barrel containing some number of red and white balls. You start with the belief that each ball was independently assigned red color (vs. white color) at some fixed probability. Furthermore, you start out ignorant of this fixed probability (the parameter could be anywhere between 0 and 1). Each red ball you see then makes it ''more'' likely that the next ball will be red (following a [http://en.wikipedia.org/wiki/Rule_of_succession Laplacian Rule of Sucession]).<br />
<br />
On the other hand, if you start out with the prior belief that the barrel contains exactly 10 red balls and 10 white balls, then each red ball you see makes it ''less'' likely that the next ball will be red (because there are fewer red balls remaining).<br />
<br />
Thus our prior can affect how we interpret the evidence. The first prior is an inductive prior - things that happened before are predicted to happen again with greater probability. The second prior is anti-inductive - the more red balls we see, the fewer we expect to see in the future.<br />
<br />
In both cases, you started out believing something about the barrel - presumably because someone else told you, or because you saw it with your own eyes. But then their words, or even your own eyesight, was evidence, and you must have had prior beliefs about probabilities and likelihoods in order to interpret the evidence. So it seems that an ideal Bayesian would need some sort of inductive prior at the very moment they were born. Where an ideal Bayesian would get this prior, has occasionally been a matter of considerable controversy in the philosophy of probability.<br />
<br />
As a real life example, consider two leaders from different political parties. Each one has his own beliefs about social organization and the roles of people and government in society. These differences can be attributed to a wide range of factors, from genetic variability to education influence in their personalities and condition the politics and laws they want to implement. However, neither can show that his beliefs are better than those of the other, unless he can show that his priors were generated by sources which track reality better<ref>Robin Hanson (2006). "Uncommon Priors Require Origin Disputes". Theory and Decision 61 (4) 319–328. http://hanson.gmu.edu/prior.pdf</ref>.<br />
<br />
==Updating prior probabilities==<br />
It's important to notice that piors represent a commitment to a certain belief. That is, as seen in this [http://lesswrong.com/lw/ear/whats_the_value_of_information/7anb Less Wrong discussion], you can't ''shift'' your prior. What happens is that, after being presented with the evidence, you update your prior probability, thus actually becoming a posterior probability.<br />
<br />
It should be noticed, however, that it can make sense to informally talk about updating priors when dealing with a sequence of inferences. In such cases, posterior probability actually becomes a prior for the next inference, so it can make it easier to refer to it in that way.<br />
<br />
==References==<br />
<br />
<references /><br />
<br />
==Blog posts==<br />
<br />
*[http://lesswrong.com/lw/hk/priors_as_mathematical_objects/ Priors as Mathematical Objects]<br />
*[http://lesswrong.com/lw/hg/inductive_bias/ "Inductive Bias"]<br />
*[http://lesswrong.com/lw/s6/probability_is_subjectively_objective/ Probability is Subjectively Objective]<br />
*[http://lesswrong.com/lw/em/bead_jar_guesses/ Bead Jar Guesses] by [[Alicorn]] - Applied scenario about forming priors.<br />
<br />
==See also==<br />
<br />
*[[Evidence]]<br />
*[[Inductive bias]]<br />
*[[Belief update]]<br />
<br />
[[Category:Jargon]]<br />
[[Category:Concepts]]<br />
[[Category:Bayesian]]</div>Pedrochaveshttps://wiki.lesswrong.com/index.php?title=Priors&diff=11223Priors2012-10-20T17:11:09Z<p>Pedrochaves: </p>
<hr />
<div>In the context of [[Bayes's Theorem]], '''Priors''' refer generically to the beliefs an agent holds regarding a fact, hypothesis or consequence, before being presented with evidence. More technically, in order for this agent to calculate a posterior probability using Bayes's Theorem, this referred prior probability and [[likelihood distribution]] are needed.<br />
<br />
==Examples==<br />
<br />
Suppose you had a barrel containing some number of red and white balls. You start with the belief that each ball was independently assigned red color (vs. white color) at some fixed probability. Furthermore, you start out ignorant of this fixed probability (the parameter could be anywhere between 0 and 1). Each red ball you see then makes it ''more'' likely that the next ball will be red (following a [http://en.wikipedia.org/wiki/Rule_of_succession Laplacian Rule of Sucession]).<br />
<br />
On the other hand, if you start out with the prior belief that the barrel contains exactly 10 red balls and 10 white balls, then each red ball you see makes it ''less'' likely that the next ball will be red (because there are fewer red balls remaining).<br />
<br />
Thus our prior can affect how we interpret the evidence. The first prior is an inductive prior - things that happened before are predicted to happen again with greater probability. The second prior is anti-inductive - the more red balls we see, the fewer we expect to see in the future.<br />
<br />
In both cases, you started out believing something about the barrel - presumably because someone else told you, or because you saw it with your own eyes. But then their words, or even your own eyesight, was evidence, and you must have had prior beliefs about probabilities and likelihoods in order to interpret the evidence. So it seems that an ideal Bayesian would need some sort of inductive prior at the very moment they were born. Where an ideal Bayesian would get this prior, has occasionally been a matter of considerable controversy in the philosophy of probability.<br />
<br />
As a real life example, consider two leaders from different political parties. Each one has his own beliefs about social organization and the roles of people and government in society. These differences can be attributed to a wide range of factors, from genetic variability to education influence in their personalities and condition the politics and laws they want to implement. However, neither can show that his beliefs are better than those of the other, unless he can show that his priors were generated by sources which track reality better<ref>Robin Hanson (2006). "Uncommon Priors Require Origin Disputes". Theory and Decision 61 (4) 319–328. http://hanson.gmu.edu/prior.pdf</ref>.<br />
<br />
<br />
==Updating prior probabilities==<br />
It's important to notice that piors represent a commitment to a certain belief. That is, as [http://lesswrong.com/lw/ear/whats_the_value_of_information/7anb Cyan puts it], you can't ''shift'' your prior. What happens is that, after being presented with the evidence, you update your prior probability, thus actually becoming a posterior probability.<br />
<br />
It should be noticed, however, that it can make sense to informally talk about updating priors when dealing with a sequence of inferences. In such cases, posterior probability actually becomes a prior for the next inference, so it can make it easier to refer to it in that way.<br />
<br />
<br />
==References==<br />
<br />
<references /><br />
<br />
==Blog posts==<br />
<br />
*[http://lesswrong.com/lw/hk/priors_as_mathematical_objects/ Priors as Mathematical Objects]<br />
*[http://lesswrong.com/lw/hg/inductive_bias/ "Inductive Bias"]<br />
*[http://lesswrong.com/lw/s6/probability_is_subjectively_objective/ Probability is Subjectively Objective]<br />
*[http://lesswrong.com/lw/em/bead_jar_guesses/ Bead Jar Guesses] by [[Alicorn]] - Applied scenario about forming priors.<br />
<br />
==See also==<br />
<br />
*[[Evidence]]<br />
*[[Inductive bias]]<br />
*[[Belief update]]<br />
<br />
[[Category:Jargon]]<br />
[[Category:Concepts]]<br />
[[Category:Bayesian]]</div>Pedrochaveshttps://wiki.lesswrong.com/index.php?title=Bayesian_decision_theory&diff=11147Bayesian decision theory2012-10-17T15:17:13Z<p>Pedrochaves: </p>
<hr />
<div>{{wikilink}}<br />
'''Bayesian decision theory''' refers to a decision theory which is informed by [[Bayesian probability]]. It is a fundamental statistical system that tries to quantify the tradeoff between various decisions, making use of probabilities and costs. An agent operating under such a decision theory uses the concepts of Bayesian statistics to estimate the expected value of its actions, and update its expectations based on new information. These agents can and are usually referred to as estimators.<br />
<br />
Consider any kind of probability distribution - such as the weather for tomorrow (encompassing several variables such as humidity, rain or temperature). From a Bayesian perspective, that represents a [[priors|prior]] distribution. That is, it represents how we believe ''today'' the weather is going to be tomorrow. This contrasts with frequentist inference, the classical probability interpretation, where conclusions about an experiment are drawn from a set of repetitions of such experience, each producing statistically independent results. For a frequentist, a probability function would be a simple distribution function with no special meaning. A Bayesian decision rule is one that consistently tries to make decisions which minimize the risk of the probability distribution. Such risk can be seen as the difference between the prior beliefs and the real outcomes - the prediction and the actual weather tomorrow.<br />
<br />
Computer algorithms such as those studied in the subject of [[Machine learning]] can use Bayesian methods also. Besides this explicit implementations, it also has been observed that naturally evolved [http://en.wikipedia.org/wiki/Bayesian_brain nervous systems] mirror these probabilistic methods when they adapt to an uncertain environment. Such systems, like the human brain, seem capable of mantaining internal probabilistic models of reality, which are used to make decisions and take actions through the motor and behavioral outputs. Such models and distributions are then constantly being updated and reconfigured according to the feedback obtained by the sensorial input.<br />
<br />
What Less Wrong refers to as [[Rationality]] is an effort to make conscious thoughts a better approximation of Bayesian decision theory, in order to better understand the world and achieve one's goals.<br />
<br />
==Further Reading & References==<br />
*Berger, James O. (1985). Statistical decision theory and Bayesian Analysis (2nd ed.). New York: Springer-Verlag. ISBN 0-387-96098-8. MR 0804611<br />
*Bernardo, José M.; Smith, Adrian F. M. (1994). Bayesian Theory. Wiley. ISBN 0-471-92416-4. MR 1274699<br />
<br />
==See also==<br />
<br />
*[[Bayesian probability]]<br />
*[[Decision theory]]<br />
<br />
[[Category:Concepts]]<br />
[[Category:Bayesian]]</div>Pedrochaveshttps://wiki.lesswrong.com/index.php?title=Priors&diff=11145Priors2012-10-17T15:04:21Z<p>Pedrochaves: </p>
<hr />
<div>In the context of [[Bayes's Theorem]], '''priors''' refers generically to the beliefs an agent holds regarding a fact, hypothesis or consequence, before being presented with evidence. More technically, in order to calculate a posterior probability using Bayes's Theorem, the agent needs both this prior probability and a [[likelihood distribution]].<br />
<br />
As a concrete example, suppose you had a barrel containing some number of red and white balls. You start with the belief that each ball was independently assigned red color (vs. white color) at some fixed probability. Furthermore, you start out ignorant of this fixed probability (the parameter could be anywhere between 0 and 1). Each red ball you see then makes it ''more'' likely that the next ball will be red (following a [http://en.wikipedia.org/wiki/Rule_of_succession Laplacian Rule of Succession]).<br />
<br />
On the other hand, if you start out with the prior belief that the barrel contains exactly 10 red balls and 10 white balls, then each red ball you see makes it ''less'' likely that the next ball will be red (because there are fewer red balls remaining).<br />
<br />
Thus our prior can affect how we interpret the evidence. The first prior is an inductive prior - things that happened before are predicted to happen again with greater probability. The second prior is anti-inductive - the more red balls we see, the fewer we expect to see in the future.<br />
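The contrast between the two priors can be sketched numerically. Under the ignorance prior, Laplace's Rule of Succession gives P(next red) = (r + 1) / (n + 2) after seeing r red balls in n draws; under the "exactly 10 red, 10 white" prior, drawing without replacement gives P(next red) = (10 − r) / (20 − n). (The function names are ours, for illustration.)

```python
# Inductive vs. anti-inductive priors for the red/white barrel example.

def laplace_next_red(r, n):
    """P(next red) under the ignorance prior (Rule of Succession)."""
    return (r + 1) / (n + 2)

def fixed_barrel_next_red(r, n, reds=10, total=20):
    """P(next red) given a barrel known to hold exactly `reds` red balls."""
    return (reds - r) / (total - n)

# Seeing a third red ball in a row raises the inductive estimate...
assert laplace_next_red(3, 3) > laplace_next_red(2, 2)
# ...but lowers the anti-inductive one (fewer red balls remain).
assert fixed_barrel_next_red(3, 3) < fixed_barrel_next_red(2, 2)
```

Before any draws, both priors agree that P(red) = 1/2; the evidence pushes them in opposite directions.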
<br />
In both cases, you started out believing something about the barrel - presumably because someone else told you, or because you saw it with your own eyes. But then their words, or even your own eyesight, were evidence, and you must have had prior beliefs about probabilities and likelihoods in order to interpret that evidence. So it seems that an ideal Bayesian would need some sort of inductive prior at the very moment they were born. Where an ideal Bayesian would get this prior has occasionally been a matter of considerable controversy in the philosophy of probability.<br />
<br />
A real-life example may come in handy to better understand how priors shape reasoning about any subject. Consider two leaders from different political parties - each has his own beliefs about social organization and the roles of people and government in society. These differences can be attributed to a wide range of factors, from genetic variability to the influence of education on their personalities, and they condition the policies and laws each wants to implement. However, both leaders - and, most importantly, the voters - should note that neither has reason to believe his reasoning is better than the other's, unless he can ''demonstrate'' that his priors lead to a better political model and improvements in society.<br />
<br />
==Prior probability==<br />
{{wikilink|Prior probability}}<br />
This specific term usually refers to a prior already based on considerable evidence - for example when we estimate the number of red balls after doing 100 similar experiments or hearing about how the box was created.<br />
<br />
As a complementary example, suppose there are a hundred boxes, one of which contains a diamond - and this is ''all'' you know about the boxes. Then your prior probability that a box contains a diamond is 1%, or prior odds of 1:99.<br />
<br />
Later you may run a diamond-detector over a box, which is 88% likely to beep when a box contains a diamond, and 8% likely to beep (a false positive) when a box doesn't contain a diamond. If the detector beeps, this represents the introduction of [[evidence]], with a [[likelihood ratio]] of 11:1 in favor of a diamond, which sends the prior odds of 1:99 to [[posterior odds]] of 11:99 = 1:9. But if someone asks you "What was your prior probability?" you would still say "My prior probability was 1%, but I saw evidence which raised the posterior probability to 10%."<br />
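The arithmetic of that update can be checked in a few lines (a worked version of the numbers above, nothing more):

```python
from fractions import Fraction

# Odds-form Bayes update for the diamond-detector example:
# posterior odds = prior odds × likelihood ratio.

prior_odds = Fraction(1, 99)          # 1:99 that the box holds a diamond
likelihood_ratio = Fraction(88, 8)    # P(beep|diamond) / P(beep|no diamond) = 11
posterior_odds = prior_odds * likelihood_ratio  # 11:99 = 1:9

# Convert odds back to a probability.
posterior_prob = posterior_odds / (1 + posterior_odds)

assert posterior_odds == Fraction(1, 9)
assert posterior_prob == Fraction(1, 10)  # i.e. 10%
```

Working in odds form keeps the update to a single multiplication; the 88% and 8% detection rates matter only through their ratio.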
<br />
Your '''prior probability''' in this case was actually a prior belief based on a certain amount of information - i.e., someone ''told'' you that one out of a hundred boxes contained a diamond. Indeed, someone also told you how the detector worked - what sort of evidence a beep represented. In conclusion, the term prior probability usually refers to a single summary judgment of some variable's prior probability, rather than the general Bayesian notion of '''priors''' described above.<br />
<br />
==Blog posts==<br />
<br />
*[http://lesswrong.com/lw/hk/priors_as_mathematical_objects/ Priors as Mathematical Objects]<br />
*[http://lesswrong.com/lw/hg/inductive_bias/ "Inductive Bias"]<br />
*[http://lesswrong.com/lw/s6/probability_is_subjectively_objective/ Probability is Subjectively Objective]<br />
*[http://lesswrong.com/lw/em/bead_jar_guesses/ Bead Jar Guesses] by [[Alicorn]] - Applied scenario about forming priors.<br />
<br />
==See also==<br />
<br />
*[[Evidence]]<br />
*[[Inductive bias]]<br />
*[[Belief update]]<br />
<br />
[[Category:Jargon]]<br />
[[Category:Concepts]]<br />
[[Category:Bayesian]]</div>Pedrochaveshttps://wiki.lesswrong.com/index.php?title=Prior_probability&diff=11076Prior probability2012-10-13T01:29:21Z<p>Pedrochaves: Redirected page to Prior Probability</p>
<hr />
<div>#REDIRECT [[Prior Probability]]</div>Pedrochaveshttps://wiki.lesswrong.com/index.php?title=Fun_theory&diff=11075Fun theory2012-10-13T01:27:35Z<p>Pedrochaves: </p>
<hr />
<div>'''Fun theory''' is the field of knowledge occupied with studying the concept of fun (as the opposite of boredom). It tries to answer problems such as how we should quantify it, how desirable it is, and how it relates to the human living experience. It has been one of the major interests of [[Eliezer Yudkowsky]] while writing for Less Wrong.<br />
<br />
==The argument against Enlightenment==<br />
While discussing [[transhumanism]] and related fields such as cryonics or lifespan extension, conservatives have raised the objection that such enhancements would bring boredom and the end of fun as we know it. More specifically, if we self-improve human minds to extreme levels of intelligence, all challenges known today may bore us. Likewise, when superhumanly intelligent machines take care of our every need, it seems that no challenges or fun will remain. On this view, we should avoid that path of development. <br />
<br />
The implicit open question is whether the universe will offer, or we ourselves can create, ever more complex and sophisticated opportunities to delight, entertain and challenge ever more powerful and resourceful minds.<br />
<br />
==The concept of Utopia==<br />
Transhumanists are usually seen as working towards a better human future. This future is sometimes conceptualized, as George Orwell [http://www.orwell.ru/library/articles/socialists/english/e_fun aptly describes it], as a Utopia:<br />
<br />
<blockquote><br />
"It is a commonplace that the Christian Heaven, as usually portrayed, would attract nobody. Almost all Christian writers dealing with Heaven either say frankly that it is indescribable or conjure up a vague picture of gold, precious stones, and the endless singing of hymns... [W]hat it could not do was to describe a condition in which the ordinary human being actively wanted to be." <br />
</blockquote><br />
<br />
Imagining this perfect future where every problem is solved and where there is constant peace and rest - as seen above, a close parallel to several religious Heavens - rapidly leads to the conclusion that no one would actually want to live there.<br />
<br />
==Complex values and Fun Theory's solution==<br />
A key insight of Fun Theory, in its current embryonic form, is that ''eudaimonia'' - the classical framework in which happiness is the ultimate human goal - is [[Complexity of value|complicated]]. That is, there are many properties which contribute to a life worth living. We humans require many things to experience a fulfilled life: aesthetic stimulation, pleasure, love, social interaction, learning, challenge, and much more.<br />
<br />
It is a common mistake in discussions of future AI to extract only one element of human preferences and advocate that it alone be maximized. This would lead to neglect of all the other values. For example, if we simply optimize for pleasure or happiness, we will [[wireheading|"wirehead"]] - stimulate the relevant parts of our brain and experience bliss for eternity, but pursue no other experiences. If almost ''any'' element of our value system is absent, then the human future will likely be very unpleasant. <br />
<br />
Enhanced humans would presumably start with the value system of humans today, but we may choose to change it as we self-enhance. We may want to alter our own value system by eliminating values, like bloodlust, which on reflection we wish were absent. But there are many values which we, on reflection, want to keep, and since we humans have no basis for a value system other than our current one, Fun Theory must seek to maximize the value system we have, rather than inventing new values.<br />
<br />
Fun Theory thus seeks to let us keep our curiosity and love of learning intact, while preventing the extremes of boredom possible in a transhuman future where our strongly boosted intellects have exhausted all challenges. More broadly, Fun Theory seeks to allow humanity to enjoy life when all needs are easily satisfied, avoiding the trap of a classical Utopian future.<br />
<br />
==External links==<br />
* George Orwell, [http://www.orwell.ru/library/articles/socialists/english/e_fun Why Socialists Don't Believe in Fun]<br />
* David Pearce, [http://paradise-engineering.com/ Paradise Engineering] and [http://www.hedweb.com/hedab.htm The Hedonistic Imperative] provides a more nuanced alternative to wireheading.<br />
<br />
==See also==<br />
*[[The Fun Theory Sequence]]<br />
*[[Complexity of value]]<br />
*[[Metaethics sequence]]<br />
<br />
[[Category:Theses]]</div>Pedrochaveshttps://wiki.lesswrong.com/index.php?title=Computing_overhang&diff=11074Computing overhang2012-10-13T01:25:13Z<p>Pedrochaves: </p>
<hr />
<div>'''Computing overhang''' refers to a situation where new algorithms can exploit existing computing power far more efficiently than before. This can happen if previously used algorithms have been suboptimal.<br />
<br />
In the context of [[Artificial General Intelligence]], this signifies a situation where it becomes possible to create AGIs that can be run using only a small fraction of the easily available hardware resources. This could lead to an [[intelligence explosion]], or to a massive increase in the number of AGIs, as they could be easily copied to run on countless computers. This could make AGIs much more powerful than before, and present an [[existential risk]].<br />
<br />
==Examples==<br />
In 2010, the President's Council of Advisors on Science and Technology [http://www.whitehouse.gov/sites/default/files/microsites/ostp/pcast-nitrd-report-2010.pdf reported] that a benchmark production planning model had become faster by a factor of 43 million between 1988 and 2003. Of this improvement, only a factor of roughly 1,000 was due to better hardware, while a factor of roughly 43,000 came from algorithmic improvements. This clearly reflects a situation where new programming methods were able to use available computing power more efficiently.<br />
<br />
Enormous amounts of computing power are currently available in the form of supercomputers and distributed computing. Large AI projects can grow to fill these resources, either by using deeper and deeper search trees, as in high-powered chess programs, or by performing large amounts of parallel operations on extensive databases, as in IBM's Watson playing Jeopardy. While the extra depth and breadth are helpful, a simple brute-force extension of existing techniques is unlikely to be the optimal use of the available computing resources. This leaves room for improvement on the algorithmic side, where most current work is focused.<br />
<br />
Though estimates of the computing power required for [[whole brain emulation]] place it at least a decade away, it is very unlikely that the algorithms used by the human brain are the most computationally efficient route to AI. This is mainly because our brains evolved through natural selection and thus were not deliberately designed to be easily modeled or reimplemented.<br />
<br />
As Yudkowsky [http://intelligence.org/files/LOGI.pdf puts it], human intelligence, created by this "blind" evolutionary process, has only recently developed the capacity for planning and forward thinking - ''deliberation''. Almost all the rest of our cognitive toolkit is the result of ancestral selection pressures, which form the roots of almost all our behavior. As such, the design of complex systems in which the designer - us - collaborates with the system being constructed carries a new signature, and offers a path to AGI completely different from the process that gave birth to our brains.<br />
<br />
==References==<br />
*{{cite book<br />
| last1 = Muehlhauser<br />
| first1 = Luke<br />
| last2 = Salamon<br />
| first2 = Anna<br />
| contribution = Intelligence Explosion: Evidence and Import<br />
| year = 2012<br />
| title = The singularity hypothesis: A scientific and philosophical assessment<br />
| editor1-last = Eden<br />
| editor1-first = Amnon<br />
| editor2-last = Søraker<br />
| editor2-first = Johnny<br />
| editor3-last = Moor<br />
| editor3-first = James H.<br />
| editor4-last = Steinhart<br />
| editor4-first = Eric<br />
| place = Berlin<br />
| publisher = Springer<br />
| contribution-url = http://commonsenseatheism.com/wp-content/uploads/2012/02/Muehlhauser-Salamon-Intelligence-Explosion-Evidence-and-Import.pdf<br />
}}<br />
<br />
==See also==<br />
*[[Optimization process]]<br />
*[[Optimization]]</div>Pedrochaveshttps://wiki.lesswrong.com/index.php?title=Priors&diff=11073Priors2012-10-13T01:17:23Z<p>Pedrochaves: </p>
<hr />
<div>In the context of [[Bayes's Theorem]], '''Priors''' refers generically to the beliefs an agent holds regarding a fact, hypothesis or consequence, before being presented with evidence. More technically, in order for this agent to calculate a posterior probability using Bayes's Theorem, a prior probability and a [[likelihood distribution]] are needed.<br />
<br />
As a concrete example, suppose you had a barrel containing some number of red and white balls. You start with the belief that each ball was independently assigned red color (vs. white color) at some fixed probability between 0 and 1. Furthermore, you start out ignorant of this fixed probability (the parameter could be anywhere between 0 and 1). Each red ball you see then makes it ''more'' likely that the next ball will be red (following a [http://en.wikipedia.org/wiki/Rule_of_succession Laplacian Rule of Succession]).<br />
<br />
On the other hand, if you start out with the prior belief that the barrel contains exactly 10 red balls and 10 white balls, then each red ball you see makes it ''less'' likely that the next ball will be red (because there are fewer red balls remaining).<br />
<br />
Thus our prior can affect how we interpret the evidence. The first prior is an inductive prior - things that happened before are predicted to happen again with greater probability. The second prior is anti-inductive - the more red balls we see, the fewer we expect to see in the future.<br />
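The two priors above can be contrasted in a few lines of code. This is a minimal sketch; the function names are illustrative, and exact fractions are used to keep the arithmetic transparent:

```python
from fractions import Fraction

def laplace_next_red(reds_seen, balls_seen):
    """Inductive prior: Laplace's Rule of Succession, P(next red) = (r+1)/(n+2)."""
    return Fraction(reds_seen + 1, balls_seen + 2)

def fixed_count_next_red(reds_seen, balls_seen, total_red=10, total_balls=20):
    """Anti-inductive prior: exactly 10 red and 10 white balls in the barrel,
    so P(next red) is just the fraction of red balls remaining."""
    return Fraction(total_red - reds_seen, total_balls - balls_seen)

# Draw three red balls in a row and watch the same evidence pull the
# two priors in opposite directions.
for n in range(4):
    print(n, laplace_next_red(n, n), fixed_count_next_red(n, n))
# Under the first prior the probability climbs (1/2, 2/3, 3/4, ...);
# under the second it falls (1/2, 9/19, 4/9, ...).
```

The divergence after identical observations is the whole point: the prior determines what the evidence means.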
<br />
In both cases, you started out believing something about the barrel - presumably because someone else told you, or because you saw it with your own eyes. But then their words, or even your own eyesight, were evidence, and you must have had prior beliefs about probabilities and likelihoods in order to interpret that evidence. So it seems that an ideal Bayesian would need some sort of inductive prior at the very moment they were born. Where an ideal Bayesian would get this prior has occasionally been a matter of considerable controversy in the philosophy of probability.<br />
<br />
==Prior probability==<br />
{{wikilink|Prior probability}}<br />
This specific phrase usually refers to a point estimate, or prior, already based on considerable evidence - for example, when we estimate the fraction of women who have breast cancer at age 40, in advance of performing any mammographies.<br />
<br />
Suppose there are a hundred boxes, one of which contains a diamond - and this is ''all'' you know about the boxes. Then your prior probability that a box contains a diamond is 1%, or prior odds of 1:99.<br />
<br />
Later you may run a diamond-detector over a box, which is 88% likely to beep when a box contains a diamond, and 8% likely to beep (false positive) when a box doesn't contain a diamond. If the detector beeps, then this is [[evidence]] with a [[likelihood ratio]] of 11:1 in favor of a diamond, which sends the prior odds of 1:99 to [[posterior odds]] of 11:99 = 1:9. But if someone asks you "What was your prior probability?" you would still say "My prior probability was 1%, but I saw evidence which raised the posterior probability to 10%."<br />
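The odds-form update in this example can be verified directly. A minimal sketch, using exact fractions so the 1:99 → 1:9 arithmetic is visible:

```python
from fractions import Fraction

# Numbers from the diamond-detector example above.
prior_odds = Fraction(1, 99)  # 1 diamond box : 99 empty boxes

# The detector beeps 88% of the time with a diamond, 8% without,
# so a beep carries a likelihood ratio of 88:8 = 11:1.
likelihood_ratio = Fraction(88, 100) / Fraction(8, 100)

# Bayes's Theorem in odds form: posterior odds = prior odds * likelihood ratio.
posterior_odds = prior_odds * likelihood_ratio        # 11:99 = 1:9
posterior_prob = posterior_odds / (1 + posterior_odds)  # convert odds -> probability

print(posterior_odds, posterior_prob)  # 1/9 and 1/10
```

Multiplying odds by likelihood ratios is often easier to do in one's head than applying Bayes's Theorem in its probability form.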
<br />
Your '''prior probability''' in this case was actually a prior belief based on a certain amount of information - i.e., someone ''told'' you that one out of a hundred boxes contained a diamond. Indeed, someone told you how the detector worked - what sort of evidence a beep represented. As such, the term "prior probability" tends to refer to a single summary judgment of some variable's probability, as opposed to the Bayesian's more general notion of '''priors''' described above.<br />
<br />
==Blog posts==<br />
<br />
*[http://lesswrong.com/lw/hk/priors_as_mathematical_objects/ Priors as Mathematical Objects]<br />
*[http://lesswrong.com/lw/hg/inductive_bias/ "Inductive Bias"]<br />
*[http://lesswrong.com/lw/s6/probability_is_subjectively_objective/ Probability is Subjectively Objective]<br />
*[http://lesswrong.com/lw/em/bead_jar_guesses/ Bead Jar Guesses] by [[Alicorn]] - Applied scenario about forming priors.<br />
<br />
==See also==<br />
<br />
*[[Evidence]]<br />
*[[Inductive bias]]<br />
*[[Belief update]]<br />
<br />
[[Category:Jargon]]<br />
[[Category:Concepts]]<br />
[[Category:Bayesian]]</div>Pedrochaveshttps://wiki.lesswrong.com/index.php?title=Priors&diff=11072Priors2012-10-13T01:14:55Z<p>Pedrochaves: </p>
<hr />
<div>In the [[Bayes's Theorem]] context, '''Priors''' refers generically to the beliefs an agent holds regarding a fact, hypothesis or consequence, before being presented with evidence. More technically, in order for this agent to calculate a posterior probability using [[Bayes's Theorem]], this referred prior probability and [[likelihood distribution]] are needed.<br />
<br />
As a concrete example, suppose you had a barrel containing some number of red and white balls. You start with the belief that each ball was independently assigned red color (vs. white color) at some fixed probability between 0 and 1. Furthermore, you start out ignorant of this fixed probability (the parameter could anywhere between 0 and 1). Each red ball you see then makes it ''more'' likely that the next ball will be red (following a [http://en.wikipedia.org/wiki/Rule_of_succession Laplacian Rule of Sucession]).<br />
<br />
On the other hand, if you start out with the prior belief that the barrel contains exactly 10 red balls and 10 white balls, then each red ball you see makes it ''less'' likely that the next ball will be red (because there are fewer red balls remaining).<br />
<br />
Thus our prior can affect how we interpret the evidence. The first prior is an inductive prior - things that happened before are predicted to happen again with greater probability. The second prior is anti-inductive - the more red balls we see, the fewer we expect to see in the future.<br />
<br />
In both cases, you started out believing something about the barrel - presumably because someone else told you, or because you saw it with your own eyes. But then their words, or even your own eyesight, was evidence, and you must have had prior beliefs about probabilities and likelihoods in order to interpret the evidence. So it seems that an ideal Bayesian would need some sort of inductive prior at the very moment they were born. Where an ideal Bayesian would get this prior, has occasionally been a matter of considerable controversy in the philosophy of probability.<br />
<br />
==Prior probability==<br />
{{wikilink|Prior probability}}<br />
This specific phrase usually refers to a point estimate, or prior, already based on considerable evidence - for example when we estimate the number of women who start out with breast cancer at age 40, in advance of performing any mammographies.<br />
<br />
Suppose there are a hundred boxes, one of which contains a diamond - and this is ''all'' you know about the boxes. Then your prior probability that a box contains a diamond is 1%, or prior odds of 1:99.<br />
<br />
Later you may run a diamond-detector over a box, which is 88% likely to beep when a box contains a diamond, and 8% likely to beep (false positive) when a box doesn't contain a diamond. If the detector beeps, then this is [[evidence]] with a [[likelihood ratio]] of 11:1 in favor of a diamond, which sends the prior odds of 1:99 to [[posterior odds]] of 11:99 = 1:9. But if someone asks you "What was your prior probability?" you would still say "My prior probability was 1%, but I saw evidence which raised the posterior probability to 10%."<br />
<br />
Your '''prior probability''' in this case was actually a prior belief based on a certain amount of information - i.e., someone ''told'' you that one out of a hundred boxes contained a diamond. Indeed, someone told you how the detector worked - what sort of evidence a beep represented. As such, the term prior probability is more likely to refer to a single summary judgment of some variable's prior probability, versus the above Bayesian's general notion of '''priors'''.<br />
<br />
==Blog posts==<br />
<br />
*[http://lesswrong.com/lw/hk/priors_as_mathematical_objects/ Priors as Mathematical Objects]<br />
*[http://lesswrong.com/lw/hg/inductive_bias/ "Inductive Bias"]<br />
*[http://lesswrong.com/lw/s6/probability_is_subjectively_objective/ Probability is Subjectively Objective]<br />
*[http://lesswrong.com/lw/em/bead_jar_guesses/ Bead Jar Guesses] by [[Alicorn]] - Applied scenario about forming priors.<br />
<br />
==See also==<br />
<br />
*[[Evidence]]<br />
*[[Inductive bias]]<br />
*[[Belief update]]<br />
<br />
[[Category:Jargon]]<br />
[[Category:Concepts]]<br />
[[Category:Bayesian]]</div>Pedrochaveshttps://wiki.lesswrong.com/index.php?title=Priors&diff=11071Priors2012-10-13T01:13:34Z<p>Pedrochaves: </p>
<hr />
<div>In the [[Bayes's Theorem]] context, '''Priors''' refers generically to the beliefs an agent holds regarding a fact, hypothesis or consequence, before being presented with evidence. More technically, in order for this agent to calculate a posterior probability using [[Bayes's Theorem]], this referred prior probability and [[likelihood distribution]] are needed.<br />
<br />
As a concrete example, suppose you had a barrel containing some number of red and white balls. You start with the belief that each ball was independently assigned red color (vs. white color) at some fixed probability between 0 and 1. Furthermore, you start out ignorant of this fixed probability (the parameter could anywhere between 0 and 1). Each red ball you see then makes it ''more'' likely that the next ball will be red (following a [http://en.wikipedia.org/wiki/Rule_of_succession Laplacian Rule of Sucession]).<br />
<br />
On the other hand, if you start out with the prior belief that the barrel contains exactly 10 red balls and 10 white balls, then each red ball you see makes it ''less'' likely that the next ball will be red (because there are fewer red balls remaining).<br />
<br />
Thus our prior can affect how we interpret the evidence. The first prior is an inductive prior - things that happened before are predicted to happen again with greater probability. The second prior is anti-inductive - the more red balls we see, the fewer we expect to see in the future.<br />
<br />
In both cases, you started out believing something about the barrel - presumably because someone else told you, or because you saw it with your own eyes. But then their words, or even your own eyesight, was evidence, and you must have had prior beliefs about probabilities and likelihoods in order to interpret the evidence. So it seems that an ideal Bayesian would need some sort of inductive prior at the very moment they were born. Where an ideal Bayesian would get this prior, has occasionally been a matter of considerable controversy in the philosophy of probability.<br />
<br />
==Prior probability==<br />
{{wikilink|Prior probability}}<br />
This specific phrase usually refers to a point estimate, or prior, already based on considerable evidence - for example when we estimate the number of women who start out with breast cancer at age 40, in advance of performing any mammographies.<br />
<br />
Suppose there are a hundred boxes, one of which contains a diamond - and this is ''all'' you know about the boxes. Then your prior probability that a box contains a diamond is 1%, or prior odds of 1:99.<br />
<br />
Later you may run a diamond-detector over a box, which is 88% likely to beep when a box contains a diamond, and 8% likely to beep (false positive) when a box doesn't contain a diamond. If the detector beeps, then this is [[evidence]] with a [[likelihood ratio]] of 11:1 in favor of a diamond, which sends the prior odds of 1:99 to [[posterior odds]] of 11:99 = 1:9. But if someone asks you "What was your prior probability?" you would still say "My prior probability was 1%, but I saw evidence which raised the posterior probability to 10%."<br />
<br />
Your '''prior probability''' in this case was actually a prior belief based on a certain amount of information - i.e., someone ''told'' you that one out of a hundred boxes contained a diamond. Indeed, someone told you how the detector worked - what sort of evidence a beep represented. For a more complicated notion of prior beliefs, including prior beliefs about the meaning of observations, see "[[priors]]". As such, prior probability is more likely to refer to a single summary judgment of some variable's prior probability, versus the above Bayesian's general notion of '''priors'''.<br />
<br />
==Blog posts==<br />
<br />
*[http://lesswrong.com/lw/hk/priors_as_mathematical_objects/ Priors as Mathematical Objects]<br />
*[http://lesswrong.com/lw/hg/inductive_bias/ "Inductive Bias"]<br />
*[http://lesswrong.com/lw/s6/probability_is_subjectively_objective/ Probability is Subjectively Objective]<br />
*[http://lesswrong.com/lw/em/bead_jar_guesses/ Bead Jar Guesses] by [[Alicorn]] - Applied scenario about forming priors.<br />
<br />
==See also==<br />
<br />
*[[Evidence]]<br />
*[[Inductive bias]]<br />
*[[Belief update]]<br />
<br />
[[Category:Jargon]]<br />
[[Category:Concepts]]<br />
[[Category:Bayesian]]</div>Pedrochaveshttps://wiki.lesswrong.com/index.php?title=Priors&diff=11070Priors2012-10-13T01:13:15Z<p>Pedrochaves: </p>
<hr />
<div>In the [[Bayes's Theorem]] context, '''Priors''' refers to the beliefs an agent holds regarding a fact, hypothesis or consequence, before being presented with evidence. More technically, in order for this agent to calculate a posterior probability using [[Bayes's Theorem]], this referred prior probability and [[likelihood distribution]] are needed.<br />
<br />
As a concrete example, suppose you had a barrel containing some number of red and white balls. You start with the belief that each ball was independently assigned red color (vs. white color) at some fixed probability between 0 and 1. Furthermore, you start out ignorant of this fixed probability (the parameter could anywhere between 0 and 1). Each red ball you see then makes it ''more'' likely that the next ball will be red (following a [http://en.wikipedia.org/wiki/Rule_of_succession Laplacian Rule of Sucession]).<br />
<br />
On the other hand, if you start out with the prior belief that the barrel contains exactly 10 red balls and 10 white balls, then each red ball you see makes it ''less'' likely that the next ball will be red (because there are fewer red balls remaining).<br />
<br />
Thus our prior can affect how we interpret the evidence. The first prior is an inductive prior - things that happened before are predicted to happen again with greater probability. The second prior is anti-inductive - the more red balls we see, the fewer we expect to see in the future.<br />
<br />
In both cases, you started out believing something about the barrel - presumably because someone else told you, or because you saw it with your own eyes. But then their words, or even your own eyesight, was evidence, and you must have had prior beliefs about probabilities and likelihoods in order to interpret the evidence. So it seems that an ideal Bayesian would need some sort of inductive prior at the very moment they were born. Where an ideal Bayesian would get this prior, has occasionally been a matter of considerable controversy in the philosophy of probability.<br />
<br />
==Prior probability==<br />
{{wikilink|Prior probability}}<br />
This specific phrase usually refers to a point estimate, or prior, already based on considerable evidence - for example when we estimate the number of women who start out with breast cancer at age 40, in advance of performing any mammographies.<br />
<br />
Suppose there are a hundred boxes, one of which contains a diamond - and this is ''all'' you know about the boxes. Then your prior probability that a box contains a diamond is 1%, or prior odds of 1:99.<br />
<br />
Later you may run a diamond-detector over a box, which is 88% likely to beep when a box contains a diamond, and 8% likely to beep (false positive) when a box doesn't contain a diamond. If the detector beeps, then this is [[evidence]] with a [[likelihood ratio]] of 11:1 in favor of a diamond, which sends the prior odds of 1:99 to [[posterior odds]] of 11:99 = 1:9. But if someone asks you "What was your prior probability?" you would still say "My prior probability was 1%, but I saw evidence which raised the posterior probability to 10%."<br />
<br />
Your '''prior probability''' in this case was actually a prior belief based on a certain amount of information - i.e., someone ''told'' you that one out of a hundred boxes contained a diamond. Indeed, someone told you how the detector worked - what sort of evidence a beep represented. For a more complicated notion of prior beliefs, including prior beliefs about the meaning of observations, see "[[priors]]". As such, prior probability is more likely to refer to a single summary judgment of some variable's prior probability, versus the above Bayesian's general notion of '''priors'''.<br />
<br />
==Blog posts==<br />
<br />
*[http://lesswrong.com/lw/hk/priors_as_mathematical_objects/ Priors as Mathematical Objects]<br />
*[http://lesswrong.com/lw/hg/inductive_bias/ "Inductive Bias"]<br />
*[http://lesswrong.com/lw/s6/probability_is_subjectively_objective/ Probability is Subjectively Objective]<br />
*[http://lesswrong.com/lw/em/bead_jar_guesses/ Bead Jar Guesses] by [[Alicorn]] - Applied scenario about forming priors.<br />
<br />
==See also==<br />
<br />
*[[Evidence]]<br />
*[[Inductive bias]]<br />
*[[Belief update]]<br />
<br />
[[Category:Jargon]]<br />
[[Category:Concepts]]<br />
[[Category:Bayesian]]</div>Pedrochaveshttps://wiki.lesswrong.com/index.php?title=Priors&diff=11069Priors2012-10-13T01:07:41Z<p>Pedrochaves: </p>
<hr />
<div>In the [[Bayes's Theorem]] context, '''Priors''' refers to the beliefs an agent holds regarding a fact, hypothesis or consequence, before being presented with evidence. More technically, in order for a this agent, to calculate a posterior probability using [[Bayes's Theorem]], a prior probability and [[likelihood distribution]] are needed.<br />
<br />
As a concrete example, suppose you had a barrel containing some number of red and white balls. You start with the belief that each ball was independently assigned red color (vs. white color) at some fixed probability between 0 and 1. Furthermore, you start out ignorant of this fixed probability (the parameter could anywhere between 0 and 1). Each red ball you see then makes it ''more'' likely that the next ball will be red (following a [http://en.wikipedia.org/wiki/Rule_of_succession Laplacian Rule of Sucession]).<br />
<br />
On the other hand, if you start out with the prior belief that the barrel contains exactly 10 red balls and 10 white balls, then each red ball you see makes it ''less'' likely that the next ball will be red (because there are fewer red balls remaining).<br />
<br />
Thus our prior can affect how we interpret the evidence. The first prior is an [[inductive prior]] - things that happened before are predicted to happen again with greater probability. The second prior is anti-inductive - the more red balls we see, the fewer we expect to see in the future.<br />
<br />
In both cases, you started out believing something about the barrel - presumably because someone else told you, or because you saw it with your own eyes. But then their words, or even your own eyesight, was evidence, and you must have had prior beliefs about probabilities and [[likelihoods]] in order to interpret the evidence. So it seems that an ideal Bayesian would need some sort of inductive prior at the very moment they were born; and where an ideal Bayesian would get this prior, has occasionally been a matter of considerable controversy in the philosophy of probability.<br />
<br />
==Prior probability==<br />
{{wikilink|Prior probability}}<br />
This phrase usually refers to a point estimate already based on considerable evidence - for example when we estimate the number of women who start out with breast cancer at age 40, in advance of performing any mammographies.<br />
<br />
The probability that you start with before seeing the evidence. One of the inputs into [[Bayes's Theorem]].<br />
<br />
Suppose there are a hundred boxes, one of which contains a diamond; and this is ''all'' you know about the boxes. Then your prior probability that a box contains a diamond is 1%, or prior odds of 1:99.<br />
<br />
Later you may run a diamond-detector over a box, which is 88% likely to beep when a box contains a diamond, and 8% likely to beep (false positive) when a box doesn't contain a diamond. If the detector beeps, then this is [[evidence]] with a [[likelihood ratio]] of 11:1 in favor of a diamond, which sends the prior odds of 1:99 to [[posterior odds]] of 11:99 = 1:9. But if someone asks you "What was your prior probability?" you would still say "My prior probability was 1%, but I saw evidence which raised the posterior probability to 10%."<br />
<br />
Your "prior probability" in this case was actually based on a certain amount of information - i.e., someone ''told'' you that one out of a hundred boxes contained a diamond. Indeed, someone told you how the detector worked - what sort of evidence a beep represented. For a more complicated notion of prior beliefs, including prior beliefs about the meaning of observations, see "[[priors]]". ("Prior probability" is more likely to refer to a single summary judgment of some variable's prior probability, versus a Bayesian's general "[[priors]]".)<br />
<br />
<br />
==Blog posts==<br />
<br />
*[http://lesswrong.com/lw/hk/priors_as_mathematical_objects/ Priors as Mathematical Objects]<br />
*[http://lesswrong.com/lw/hg/inductive_bias/ "Inductive Bias"]<br />
*[http://lesswrong.com/lw/s6/probability_is_subjectively_objective/ Probability is Subjectively Objective]<br />
*[http://lesswrong.com/lw/em/bead_jar_guesses/ Bead Jar Guesses] by [[Alicorn]] - Applied scenario about forming priors.<br />
<br />
==See also==<br />
<br />
*[[Evidence]]<br />
*[[Inductive bias]]<br />
*[[Belief update]]<br />
<br />
[[Category:Jargon]]<br />
[[Category:Concepts]]<br />
[[Category:Bayesian]]</div>Pedrochaveshttps://wiki.lesswrong.com/index.php?title=Priors&diff=11068Priors2012-10-13T01:06:44Z<p>Pedrochaves: </p>
<hr />
<div>In the [[Bayes's Theorem]] context, '''Priors''' refers to the beliefs an agent holds regarding a fact, hypothesis or consequence,before being presented with evidence. More technically, in order for a [[Bayesean]] to calculate a [[posterior probability]] using [[Bayes's Theorem]], a prior probability and [[likelihood distribution]] are needed.<br />
<br />
As a concrete example, suppose you had a barrel containing some number of red and white balls. You start with the belief that each ball was independently assigned red color (vs. white color) at some fixed probability between 0 and 1. Furthermore, you start out ignorant of this fixed probability (the parameter could anywhere between 0 and 1). Each red ball you see then makes it ''more'' likely that the next ball will be red (following a [http://en.wikipedia.org/wiki/Rule_of_succession Laplacian Rule of Sucession]).<br />
<br />
On the other hand, if you start out with the prior belief that the barrel contains exactly 10 red balls and 10 white balls, then each red ball you see makes it ''less'' likely that the next ball will be red (because there are fewer red balls remaining).<br />
<br />
Thus our prior can affect how we interpret the evidence. The first prior is an [[inductive prior]] - things that happened before are predicted to happen again with greater probability. The second prior is anti-inductive - the more red balls we see, the fewer we expect to see in the future.<br />
<br />
In both cases, you started out believing something about the barrel - presumably because someone else told you, or because you saw it with your own eyes. But then their words, or even your own eyesight, was evidence, and you must have had prior beliefs about probabilities and [[likelihoods]] in order to interpret the evidence. So it seems that an ideal Bayesian would need some sort of inductive prior at the very moment they were born; and where an ideal Bayesian would get this prior, has occasionally been a matter of considerable controversy in the philosophy of probability.<br />
<br />
==Prior probability==<br />
{{wikilink|Prior probability}}<br />
This phrase usually refers to a point estimate already based on considerable evidence - for example when we estimate the number of women who start out with breast cancer at age 40, in advance of performing any mammographies.<br />
<br />
The probability that you start with before seeing the evidence. One of the inputs into [[Bayes's Theorem]].<br />
<br />
Suppose there are a hundred boxes, one of which contains a diamond; and this is ''all'' you know about the boxes. Then your prior probability that a box contains a diamond is 1%, or prior odds of 1:99.<br />
<br />
Later you may run a diamond-detector over a box, which is 88% likely to beep when a box contains a diamond, and 8% likely to beep (false positive) when a box doesn't contain a diamond. If the detector beeps, then this is [[evidence]] with a [[likelihood ratio]] of 11:1 in favor of a diamond, which sends the prior odds of 1:99 to [[posterior odds]] of 11:99 = 1:9. But if someone asks you "What was your prior probability?" you would still say "My prior probability was 1%, but I saw evidence which raised the posterior probability to 10%."<br />
<br />
Your "prior probability" in this case was actually based on a certain amount of information - i.e., someone ''told'' you that one out of a hundred boxes contained a diamond. Indeed, someone told you how the detector worked - what sort of evidence a beep represented. For a more complicated notion of prior beliefs, including prior beliefs about the meaning of observations, see "[[priors]]". ("Prior probability" is more likely to refer to a single summary judgment of some variable's prior probability, versus a Bayesian's general "[[priors]]".)<br />
<br />
<br />
==Blog posts==<br />
<br />
*[http://lesswrong.com/lw/hk/priors_as_mathematical_objects/ Priors as Mathematical Objects]<br />
*[http://lesswrong.com/lw/hg/inductive_bias/ "Inductive Bias"]<br />
*[http://lesswrong.com/lw/s6/probability_is_subjectively_objective/ Probability is Subjectively Objective]<br />
*[http://lesswrong.com/lw/em/bead_jar_guesses/ Bead Jar Guesses] by [[Alicorn]] - Applied scenario about forming priors.<br />
<br />
==See also==<br />
<br />
*[[Evidence]]<br />
*[[Inductive bias]]<br />
*[[Belief update]]<br />
<br />
[[Category:Jargon]]<br />
[[Category:Concepts]]<br />
[[Category:Bayesian]]</div>Pedrochaveshttps://wiki.lesswrong.com/index.php?title=Priors&diff=11066Priors2012-10-13T00:56:40Z<p>Pedrochaves: moved Prior Probability to Priors over redirect</p>
<hr />
<div>A [[Bayesian]] uses [[Bayes's Theorem]] to [[belief update | update beliefs]] based on the [[evidence]]. This requires that, even in advance of seeing the evidence, you have beliefs about what the evidence means - how likely you are to see the evidence, ''if'' various hypotheses are true - and how likely those hypotheses were, in ''advance'' of seeing the evidence. To calculate a [[posterior probability]] using [[Bayes's Theorem]], you need a prior probability and [[likelihood distribution]].<br />
<br />
Suppose you had a barrel containing some number of red and white balls. If you start with the belief that each ball was independently assigned red color (vs. white color) at some fixed probability between 0 and 1, and you start out ignorant of this fixed probability (the parameter could anywhere between 0 and 1), then each red ball you see makes it ''more'' likely that the next ball will be red. (By [[Laplace's Rule of Succession]].)<br />
<br />
On the other hand, if you start out with the prior belief that the barrel contains exactly 10 red balls and 10 white balls, then each red ball you see makes it ''less'' likely that the next ball will be red (because there are fewer red balls remaining).<br />
<br />
Thus our prior can affect how we interpret the evidence. The first prior is an [[inductive prior]]; things that happened before are predicted to happen again with greater probability. The second prior is anti-inductive; the more red balls we see, the fewer we expect to see in the future.<br />
<br />
In both cases, you started out believing something about the barrel - presumably because someone else told you, or because you saw it with your own eyes. But then their words, or even your own eyesight, was evidence, and you must have had prior beliefs about probabilities and [[likelihoods]] in order to interpret the evidence. So it seems that an ideal Bayesian would need some sort of inductive prior at the very moment they were born; and where an ideal Bayesian would get this prior, has occasionally been a matter of considerable controversy in the philosophy of probability.<br />
<br />
==Prior probability==<br />
{{wikilink|Prior probability}}<br />
This phrase usually refers to a point estimate already based on considerable evidence - for example when we estimate the number of women who start out with breast cancer at age 40, in advance of performing any mammographies.<br />
<br />
The probability that you start with before seeing the evidence. One of the inputs into [[Bayes's Theorem]].<br />
<br />
Suppose there are a hundred boxes, one of which contains a diamond; and this is ''all'' you know about the boxes. Then your prior probability that a box contains a diamond is 1%, or prior odds of 1:99.<br />
<br />
Later you may run a diamond-detector over a box, which is 88% likely to beep when a box contains a diamond, and 8% likely to beep (false positive) when a box doesn't contain a diamond. If the detector beeps, then this is [[evidence]] with a [[likelihood ratio]] of 11:1 in favor of a diamond, which sends the prior odds of 1:99 to [[posterior odds]] of 11:99 = 1:9. But if someone asks you "What was your prior probability?" you would still say "My prior probability was 1%, but I saw evidence which raised the posterior probability to 10%."<br />
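The arithmetic of this odds-form update can be checked directly; a minimal sketch in Python:

```python
from fractions import Fraction

# Bayes's Theorem in odds form: posterior odds = prior odds * likelihood ratio.
prior_odds = Fraction(1, 99)          # 1% prior probability, as odds of 1:99
likelihood_ratio = Fraction(88, 8)    # 88% beep-if-diamond vs. 8% beep-if-empty = 11:1
posterior_odds = prior_odds * likelihood_ratio          # 11:99, i.e. 1:9
posterior_prob = posterior_odds / (1 + posterior_odds)  # odds back to probability: 10%
```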
<br />
Your "prior probability" in this case was actually based on a certain amount of information - i.e., someone ''told'' you that one out of a hundred boxes contained a diamond. Indeed, someone told you how the detector worked - what sort of evidence a beep represented. For a more complicated notion of prior beliefs, including prior beliefs about the meaning of observations, see "[[priors]]". ("Prior probability" is more likely to refer to a single summary judgment of some variable's prior probability, versus a Bayesian's general "[[priors]]".)<br />
<br />
<br />
==Blog posts==<br />
<br />
*[http://lesswrong.com/lw/hk/priors_as_mathematical_objects/ Priors as Mathematical Objects]<br />
*[http://lesswrong.com/lw/hg/inductive_bias/ "Inductive Bias"]<br />
*[http://lesswrong.com/lw/s6/probability_is_subjectively_objective/ Probability is Subjectively Objective]<br />
*[http://lesswrong.com/lw/em/bead_jar_guesses/ Bead Jar Guesses] by [[Alicorn]] - Applied scenario about forming priors.<br />
<br />
==See also==<br />
<br />
*[[Evidence]]<br />
*[[Inductive bias]]<br />
*[[Belief update]]<br />
<br />
[[Category:Jargon]]<br />
[[Category:Concepts]]<br />
[[Category:Bayesian]]</div>Pedrochaveshttps://wiki.lesswrong.com/index.php?title=Prior_Probability&diff=11067Prior Probability2012-10-13T00:56:40Z<p>Pedrochaves: moved Prior Probability to Priors over redirect</p>
<hr />
<div>#REDIRECT [[Priors]]</div>Pedrochaveshttps://wiki.lesswrong.com/index.php?title=Priors&diff=11064Priors2012-10-13T00:56:06Z<p>Pedrochaves: moved Priors to Prior Probability</p>
<hr />
<div>A [[Bayesian]] uses [[Bayes's Theorem]] to [[belief update | update beliefs]] based on the [[evidence]]. This requires that, even in advance of seeing the evidence, you have beliefs about what the evidence means - how likely you are to see the evidence, ''if'' various hypotheses are true - and how likely those hypotheses were, in ''advance'' of seeing the evidence. To calculate a [[posterior probability]] using [[Bayes's Theorem]], you need a prior probability and [[likelihood distribution]].<br />
<br />
Suppose you had a barrel containing some number of red and white balls. If you start with the belief that each ball was independently assigned red color (vs. white color) at some fixed probability between 0 and 1, and you start out ignorant of this fixed probability (the parameter could be anywhere between 0 and 1), then each red ball you see makes it ''more'' likely that the next ball will be red. (By [[Laplace's Rule of Succession]].)<br />
<br />
On the other hand, if you start out with the prior belief that the barrel contains exactly 10 red balls and 10 white balls, then each red ball you see makes it ''less'' likely that the next ball will be red (because there are fewer red balls remaining).<br />
<br />
Thus our prior can affect how we interpret the evidence. The first prior is an [[inductive prior]]; things that happened before are predicted to happen again with greater probability. The second prior is anti-inductive; the more red balls we see, the fewer we expect to see in the future.<br />
<br />
In both cases, you started out believing something about the barrel - presumably because someone else told you, or because you saw it with your own eyes. But then their words, or even your own eyesight, were evidence, and you must have had prior beliefs about probabilities and [[likelihoods]] in order to interpret that evidence. So it seems that an ideal Bayesian would need some sort of inductive prior at the very moment they were born; where an ideal Bayesian would get this prior has occasionally been a matter of considerable controversy in the philosophy of probability.<br />
<br />
==Prior probability==<br />
{{wikilink|Prior probability}}<br />
This phrase usually refers to a point estimate already based on considerable evidence - for example, when we estimate the proportion of women who have breast cancer at age 40, in advance of performing any mammographies.<br />
<br />
The probability that you start with before seeing the evidence. One of the inputs into [[Bayes's Theorem]].<br />
<br />
Suppose there are a hundred boxes, one of which contains a diamond; and this is ''all'' you know about the boxes. Then your prior probability that a box contains a diamond is 1%, or prior odds of 1:99.<br />
<br />
Later you may run a diamond-detector over a box, which is 88% likely to beep when a box contains a diamond, and 8% likely to beep (false positive) when a box doesn't contain a diamond. If the detector beeps, then this is [[evidence]] with a [[likelihood ratio]] of 11:1 in favor of a diamond, which sends the prior odds of 1:99 to [[posterior odds]] of 11:99 = 1:9. But if someone asks you "What was your prior probability?" you would still say "My prior probability was 1%, but I saw evidence which raised the posterior probability to 10%."<br />
<br />
Your "prior probability" in this case was actually based on a certain amount of information - i.e., someone ''told'' you that one out of a hundred boxes contained a diamond. Indeed, someone told you how the detector worked - what sort of evidence a beep represented. For a more complicated notion of prior beliefs, including prior beliefs about the meaning of observations, see "[[priors]]". ("Prior probability" is more likely to refer to a single summary judgment of some variable's prior probability, versus a Bayesian's general "[[priors]]".)<br />
<br />
<br />
==Blog posts==<br />
<br />
*[http://lesswrong.com/lw/hk/priors_as_mathematical_objects/ Priors as Mathematical Objects]<br />
*[http://lesswrong.com/lw/hg/inductive_bias/ "Inductive Bias"]<br />
*[http://lesswrong.com/lw/s6/probability_is_subjectively_objective/ Probability is Subjectively Objective]<br />
*[http://lesswrong.com/lw/em/bead_jar_guesses/ Bead Jar Guesses] by [[Alicorn]] - Applied scenario about forming priors.<br />
<br />
==See also==<br />
<br />
*[[Evidence]]<br />
*[[Inductive bias]]<br />
*[[Belief update]]<br />
<br />
[[Category:Jargon]]<br />
[[Category:Concepts]]<br />
[[Category:Bayesian]]</div>Pedrochaveshttps://wiki.lesswrong.com/index.php?title=Priors&diff=11063Priors2012-10-13T00:55:34Z<p>Pedrochaves: </p>
<hr />
<div>A [[Bayesian]] uses [[Bayes's Theorem]] to [[belief update | update beliefs]] based on the [[evidence]]. This requires that, even in advance of seeing the evidence, you have beliefs about what the evidence means - how likely you are to see the evidence, ''if'' various hypotheses are true - and how likely those hypotheses were, in ''advance'' of seeing the evidence. To calculate a [[posterior probability]] using [[Bayes's Theorem]], you need a prior probability and [[likelihood distribution]].<br />
<br />
Suppose you had a barrel containing some number of red and white balls. If you start with the belief that each ball was independently assigned red color (vs. white color) at some fixed probability between 0 and 1, and you start out ignorant of this fixed probability (the parameter could be anywhere between 0 and 1), then each red ball you see makes it ''more'' likely that the next ball will be red. (By [[Laplace's Rule of Succession]].)<br />
<br />
On the other hand, if you start out with the prior belief that the barrel contains exactly 10 red balls and 10 white balls, then each red ball you see makes it ''less'' likely that the next ball will be red (because there are fewer red balls remaining).<br />
<br />
Thus our prior can affect how we interpret the evidence. The first prior is an [[inductive prior]]; things that happened before are predicted to happen again with greater probability. The second prior is anti-inductive; the more red balls we see, the fewer we expect to see in the future.<br />
<br />
In both cases, you started out believing something about the barrel - presumably because someone else told you, or because you saw it with your own eyes. But then their words, or even your own eyesight, were evidence, and you must have had prior beliefs about probabilities and [[likelihoods]] in order to interpret that evidence. So it seems that an ideal Bayesian would need some sort of inductive prior at the very moment they were born; where an ideal Bayesian would get this prior has occasionally been a matter of considerable controversy in the philosophy of probability.<br />
<br />
==Prior probability==<br />
{{wikilink|Prior probability}}<br />
This phrase usually refers to a point estimate already based on considerable evidence - for example, when we estimate the proportion of women who have breast cancer at age 40, in advance of performing any mammographies.<br />
<br />
The probability that you start with before seeing the evidence. One of the inputs into [[Bayes's Theorem]].<br />
<br />
Suppose there are a hundred boxes, one of which contains a diamond; and this is ''all'' you know about the boxes. Then your prior probability that a box contains a diamond is 1%, or prior odds of 1:99.<br />
<br />
Later you may run a diamond-detector over a box, which is 88% likely to beep when a box contains a diamond, and 8% likely to beep (false positive) when a box doesn't contain a diamond. If the detector beeps, then this is [[evidence]] with a [[likelihood ratio]] of 11:1 in favor of a diamond, which sends the prior odds of 1:99 to [[posterior odds]] of 11:99 = 1:9. But if someone asks you "What was your prior probability?" you would still say "My prior probability was 1%, but I saw evidence which raised the posterior probability to 10%."<br />
<br />
Your "prior probability" in this case was actually based on a certain amount of information - i.e., someone ''told'' you that one out of a hundred boxes contained a diamond. Indeed, someone told you how the detector worked - what sort of evidence a beep represented. For a more complicated notion of prior beliefs, including prior beliefs about the meaning of observations, see "[[priors]]". ("Prior probability" is more likely to refer to a single summary judgment of some variable's prior probability, versus a Bayesian's general "[[priors]]".)<br />
<br />
<br />
==Blog posts==<br />
<br />
*[http://lesswrong.com/lw/hk/priors_as_mathematical_objects/ Priors as Mathematical Objects]<br />
*[http://lesswrong.com/lw/hg/inductive_bias/ "Inductive Bias"]<br />
*[http://lesswrong.com/lw/s6/probability_is_subjectively_objective/ Probability is Subjectively Objective]<br />
*[http://lesswrong.com/lw/em/bead_jar_guesses/ Bead Jar Guesses] by [[Alicorn]] - Applied scenario about forming priors.<br />
<br />
==See also==<br />
<br />
*[[Evidence]]<br />
*[[Inductive bias]]<br />
*[[Belief update]]<br />
<br />
[[Category:Jargon]]<br />
[[Category:Concepts]]<br />
[[Category:Bayesian]]</div>Pedrochaveshttps://wiki.lesswrong.com/index.php?title=Prior_probability&diff=11062Prior probability2012-10-13T00:55:24Z<p>Pedrochaves: Blanked the page</p>
<hr />
<div></div>Pedrochaveshttps://wiki.lesswrong.com/index.php?title=Priors&diff=11059Priors2012-10-12T20:08:27Z<p>Pedrochaves: </p>
<hr />
<div>A [[Bayesian]] uses [[Bayes's Theorem]] to [[belief update | update beliefs]] based on the [[evidence]]. This requires that, even in advance of seeing the evidence, you have beliefs about what the evidence means - how likely you are to see the evidence, ''if'' various hypotheses are true - and how likely those hypotheses were, in ''advance'' of seeing the evidence. To calculate a [[posterior probability]] using [[Bayes's Theorem]], you need a [[prior probability]] and [[likelihood distribution]].<br />
<br />
Suppose you had a barrel containing some number of red and white balls. If you start with the belief that each ball was independently assigned red color (vs. white color) at some fixed probability between 0 and 1, and you start out ignorant of this fixed probability (the parameter could be anywhere between 0 and 1), then each red ball you see makes it ''more'' likely that the next ball will be red. (By [[Laplace's Rule of Succession]].)<br />
<br />
On the other hand, if you start out with the prior belief that the barrel contains exactly 10 red balls and 10 white balls, then each red ball you see makes it ''less'' likely that the next ball will be red (because there are fewer red balls remaining).<br />
<br />
Thus our prior can affect how we interpret the evidence. The first prior is an [[inductive prior]]; things that happened before are predicted to happen again with greater probability. The second prior is anti-inductive; the more red balls we see, the fewer we expect to see in the future.<br />
<br />
In both cases, you started out believing something about the barrel - presumably because someone else told you, or because you saw it with your own eyes. But then their words, or even your own eyesight, were evidence, and you must have had prior beliefs about probabilities and [[likelihoods]] in order to interpret that evidence. So it seems that an ideal Bayesian would need some sort of inductive prior at the very moment they were born; where an ideal Bayesian would get this prior has occasionally been a matter of considerable controversy in the philosophy of probability.<br />
<br />
==Prior probability==<br />
{{wikilink|Prior probability}}<br />
This phrase usually refers to a point estimate already based on considerable evidence - for example, when we estimate the proportion of women who have breast cancer at age 40, in advance of performing any mammographies.<br />
<br />
The probability that you start with before seeing the evidence. One of the inputs into [[Bayes's Theorem]].<br />
<br />
Suppose there are a hundred boxes, one of which contains a diamond; and this is ''all'' you know about the boxes. Then your prior probability that a box contains a diamond is 1%, or prior odds of 1:99.<br />
<br />
Later you may run a diamond-detector over a box, which is 88% likely to beep when a box contains a diamond, and 8% likely to beep (false positive) when a box doesn't contain a diamond. If the detector beeps, then this is [[evidence]] with a [[likelihood ratio]] of 11:1 in favor of a diamond, which sends the prior odds of 1:99 to [[posterior odds]] of 11:99 = 1:9. But if someone asks you "What was your prior probability?" you would still say "My prior probability was 1%, but I saw evidence which raised the posterior probability to 10%."<br />
<br />
Your "prior probability" in this case was actually based on a certain amount of information - i.e., someone ''told'' you that one out of a hundred boxes contained a diamond. Indeed, someone told you how the detector worked - what sort of evidence a beep represented. For a more complicated notion of prior beliefs, including prior beliefs about the meaning of observations, see "[[priors]]". ("Prior probability" is more likely to refer to a single summary judgment of some variable's prior probability, versus a Bayesian's general "[[priors]]".)<br />
<br />
<br />
==Blog posts==<br />
<br />
*[http://lesswrong.com/lw/hk/priors_as_mathematical_objects/ Priors as Mathematical Objects]<br />
*[http://lesswrong.com/lw/hg/inductive_bias/ "Inductive Bias"]<br />
*[http://lesswrong.com/lw/s6/probability_is_subjectively_objective/ Probability is Subjectively Objective]<br />
*[http://lesswrong.com/lw/em/bead_jar_guesses/ Bead Jar Guesses] by [[Alicorn]] - Applied scenario about forming priors.<br />
<br />
==See also==<br />
<br />
*[[Evidence]]<br />
*[[Inductive bias]]<br />
*[[Belief update]]<br />
*[[Prior probability]]<br />
<br />
[[Category:Jargon]]<br />
[[Category:Concepts]]<br />
[[Category:Bayesian]]</div>Pedrochaveshttps://wiki.lesswrong.com/index.php?title=Priors&diff=11058Priors2012-10-12T20:07:53Z<p>Pedrochaves: </p>
<hr />
<div>{{wikilink|Prior probability}}<br />
A [[Bayesian]] uses [[Bayes's Theorem]] to [[belief update | update beliefs]] based on the [[evidence]]. This requires that, even in advance of seeing the evidence, you have beliefs about what the evidence means - how likely you are to see the evidence, ''if'' various hypotheses are true - and how likely those hypotheses were, in ''advance'' of seeing the evidence. To calculate a [[posterior probability]] using [[Bayes's Theorem]], you need a [[prior probability]] and [[likelihood distribution]].<br />
<br />
Suppose you had a barrel containing some number of red and white balls. If you start with the belief that each ball was independently assigned red color (vs. white color) at some fixed probability between 0 and 1, and you start out ignorant of this fixed probability (the parameter could be anywhere between 0 and 1), then each red ball you see makes it ''more'' likely that the next ball will be red. (By [[Laplace's Rule of Succession]].)<br />
<br />
On the other hand, if you start out with the prior belief that the barrel contains exactly 10 red balls and 10 white balls, then each red ball you see makes it ''less'' likely that the next ball will be red (because there are fewer red balls remaining).<br />
<br />
Thus our prior can affect how we interpret the evidence. The first prior is an [[inductive prior]]; things that happened before are predicted to happen again with greater probability. The second prior is anti-inductive; the more red balls we see, the fewer we expect to see in the future.<br />
<br />
In both cases, you started out believing something about the barrel - presumably because someone else told you, or because you saw it with your own eyes. But then their words, or even your own eyesight, were evidence, and you must have had prior beliefs about probabilities and [[likelihoods]] in order to interpret that evidence. So it seems that an ideal Bayesian would need some sort of inductive prior at the very moment they were born; where an ideal Bayesian would get this prior has occasionally been a matter of considerable controversy in the philosophy of probability.<br />
<br />
==Prior probability==<br />
This phrase usually refers to a point estimate already based on considerable evidence - for example, when we estimate the proportion of women who have breast cancer at age 40, in advance of performing any mammographies.<br />
<br />
The probability that you start with before seeing the evidence. One of the inputs into [[Bayes's Theorem]].<br />
<br />
Suppose there are a hundred boxes, one of which contains a diamond; and this is ''all'' you know about the boxes. Then your prior probability that a box contains a diamond is 1%, or prior odds of 1:99.<br />
<br />
Later you may run a diamond-detector over a box, which is 88% likely to beep when a box contains a diamond, and 8% likely to beep (false positive) when a box doesn't contain a diamond. If the detector beeps, then this is [[evidence]] with a [[likelihood ratio]] of 11:1 in favor of a diamond, which sends the prior odds of 1:99 to [[posterior odds]] of 11:99 = 1:9. But if someone asks you "What was your prior probability?" you would still say "My prior probability was 1%, but I saw evidence which raised the posterior probability to 10%."<br />
<br />
Your "prior probability" in this case was actually based on a certain amount of information - i.e., someone ''told'' you that one out of a hundred boxes contained a diamond. Indeed, someone told you how the detector worked - what sort of evidence a beep represented. For a more complicated notion of prior beliefs, including prior beliefs about the meaning of observations, see "[[priors]]". ("Prior probability" is more likely to refer to a single summary judgment of some variable's prior probability, versus a Bayesian's general "[[priors]]".)<br />
<br />
<br />
==Blog posts==<br />
<br />
*[http://lesswrong.com/lw/hk/priors_as_mathematical_objects/ Priors as Mathematical Objects]<br />
*[http://lesswrong.com/lw/hg/inductive_bias/ "Inductive Bias"]<br />
*[http://lesswrong.com/lw/s6/probability_is_subjectively_objective/ Probability is Subjectively Objective]<br />
*[http://lesswrong.com/lw/em/bead_jar_guesses/ Bead Jar Guesses] by [[Alicorn]] - Applied scenario about forming priors.<br />
<br />
==See also==<br />
<br />
*[[Evidence]]<br />
*[[Inductive bias]]<br />
*[[Belief update]]<br />
*[[Prior probability]]<br />
<br />
[[Category:Jargon]]<br />
[[Category:Concepts]]<br />
[[Category:Bayesian]]</div>Pedrochaveshttps://wiki.lesswrong.com/index.php?title=Computing_overhang&diff=11033Computing overhang2012-10-11T15:36:18Z<p>Pedrochaves: </p>
<hr />
<div>'''Computing overhang''' refers to a situation where new algorithms can exploit existing computing power far more efficiently than before. This can happen if previously used algorithms have been suboptimal.<br />
<br />
In the context of [[Artificial General Intelligence]], this signifies a situation where it becomes possible to create AGIs that can be run using only a small fraction of the easily available hardware resources. This could lead to an [[intelligence explosion]], or to a massive increase in the number of AGIs, as they could be easily copied to run on countless computers. This could make AGIs much more powerful than before, and present an [[existential risk]].<br />
<br />
==Examples==<br />
In 2010, the President's Council of Advisors on Science and Technology [http://www.whitehouse.gov/sites/default/files/microsites/ostp/pcast-nitrd-report-2010.pdf reported] that a benchmark production-planning model had become faster by a factor of 43 million between 1988 and 2003. Of this improvement, only a factor of roughly 1,000 was due to better hardware, while a factor of roughly 43,000 came from algorithmic improvements. This clearly reflects a situation in which new programming methods were able to use available computing power more efficiently.<br />
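The hardware and algorithmic contributions multiply to give the overall figure; a quick consistency check in Python:

```python
# Decomposing the ~43-million-fold speedup in the PCAST benchmark (1988-2003):
hardware_factor = 1_000        # improvement attributable to better hardware
algorithm_factor = 43_000      # improvement attributable to better algorithms
total_factor = hardware_factor * algorithm_factor  # the reported overall speedup
```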
<br />
Enormous amounts of computing power are currently available in the form of supercomputers and distributed computing. Large AI projects can grow to fill these resources by using deeper and deeper search trees, as high-powered chess programs do, or by performing large numbers of parallel operations on extensive databases, as IBM's Watson did when playing ''Jeopardy!''. While the extra depth and breadth are helpful, a simple brute-force extension of existing techniques is unlikely to be the optimal use of the available computing resources. This leaves room for improvement on the algorithmic side, which is where most current work is focused.<br />
<br />
Though estimates of [[whole brain emulation]] place that level of computing power at least a decade away, it is very unlikely that the algorithms used by the human brain are the most computationally efficient way to produce AI. This is mainly because our brains evolved through natural selection and thus weren't deliberately designed to be easy to model or reimplement in software. <br />
<br />
As Yudkowsky [http://intelligence.org/files/LOGI.pdf puts it], human intelligence, created by this "blind" evolutionary process, has only recently developed the capacity for planning and forward thinking - ''deliberation''. Almost all of our cognitive tools are instead the result of ancestral selection pressures, which form the roots of almost all our behavior. By contrast, when we design complex systems in which the designer - us - collaborates with the system being constructed, we are faced with a new design signature and a route to AGI completely different from the process that gave birth to our brains.<br />
<br />
==References==<br />
*{{cite book<br />
| last1 = Muehlhauser<br />
| first1 = Luke<br />
| last2 = Salamon<br />
| first2 = Anna<br />
| contribution = Intelligence Explosion: Evidence and Import<br />
| year = 2012<br />
| title = The singularity hypothesis: A scientific and philosophical assessment<br />
| editor1-last = Eden<br />
| editor1-first = Amnon<br />
| editor2-last = Søraker<br />
| editor2-first = Johnny<br />
| editor3-last = Moor<br />
| editor3-first = James H.<br />
| editor4-last = Steinhart<br />
| editor4-first = Eric<br />
| place = Berlin<br />
| publisher = Springer<br />
| contribution-url = http://commonsenseatheism.com/wp-content/uploads/2012/02/Muehlhauser-Salamon-Intelligence-Explosion-Evidence-and-Import.pdf<br />
}}<br />
<br />
==See also==<br />
*[[Optimization process]]<br />
*[[Optimization]]</div>Pedrochaveshttps://wiki.lesswrong.com/index.php?title=Computing_overhang&diff=11032Computing overhang2012-10-11T15:33:01Z<p>Pedrochaves: </p>
<hr />
<div>'''Computing overhang''' refers to a situation where new algorithms can exploit existing computing power far more efficiently than before. This can happen if previously used algorithms have been suboptimal.<br />
<br />
In the context of [[Artificial General Intelligence]], this signifies a situation where it becomes possible to create AGIs that can be run using only a small fraction of the easily available hardware resources. This could lead to an [[intelligence explosion]], or to a massive increase in the number of AGIs, as they could be easily copied to run on countless computers. This could make AGIs much more powerful than before, and present an [[existential risk]].<br />
<br />
==Examples==<br />
In 2010, the President's Council of Advisors on Science and Technology [http://www.whitehouse.gov/sites/default/files/microsites/ostp/pcast-nitrd-report-2010.pdf reported] that a benchmark production-planning model had become faster by a factor of 43 million between 1988 and 2003. Of this improvement, only a factor of roughly 1,000 was due to better hardware, while a factor of roughly 43,000 came from algorithmic improvements. This clearly reflects a situation in which new programming methods were able to use available computing power more efficiently.<br />
<br />
Enormous amounts of computing power are currently available in the form of supercomputers and distributed computing. Large AI projects can grow to fill these resources by using deeper and deeper search trees, as high-powered chess programs do, or by performing large numbers of parallel operations on extensive databases, as IBM's Watson did when playing ''Jeopardy!''. While the extra depth and breadth are helpful, a simple brute-force extension of existing techniques is unlikely to be the optimal use of the available computing resources. This leaves room for improvement on the algorithmic side, which is where most current work is focused.<br />
<br />
Though estimates of [[whole brain emulation]] place that level of computing power at least a decade away, it is very unlikely that the algorithms used by the human brain are the most computationally efficient way to produce AI. This is mainly because our brains evolved through natural selection and thus weren't deliberately designed to be easy to model or reimplement in software. <br />
<br />
As Yudkowsky [http://intelligence.org/files/LOGI.pdf puts it], human intelligence, created by this "blind" evolutionary process, has only recently developed the capacity for planning and forward thinking - ''deliberation''. Almost all of our cognitive tools are instead the result of ancestral selection pressures, which form the roots of almost all our behavior. <br />
<br />
By contrast, when we design complex systems in which the designer - us - collaborates with the system being constructed, we are faced with a new design signature and a route to AGI completely different from the process that gave birth to our brains.<br />
<br />
==References==<br />
*{{cite book<br />
| last1 = Muehlhauser<br />
| first1 = Luke<br />
| last2 = Salamon<br />
| first2 = Anna<br />
| contribution = Intelligence Explosion: Evidence and Import<br />
| year = 2012<br />
| title = The singularity hypothesis: A scientific and philosophical assessment<br />
| editor1-last = Eden<br />
| editor1-first = Amnon<br />
| editor2-last = Søraker<br />
| editor2-first = Johnny<br />
| editor3-last = Moor<br />
| editor3-first = James H.<br />
| editor4-last = Steinhart<br />
| editor4-first = Eric<br />
| place = Berlin<br />
| publisher = Springer<br />
| contribution-url = http://commonsenseatheism.com/wp-content/uploads/2012/02/Muehlhauser-Salamon-Intelligence-Explosion-Evidence-and-Import.pdf<br />
}}<br />
<br />
==See also==<br />
*[[Optimization process]]<br />
*[[Optimization]]</div>Pedrochaveshttps://wiki.lesswrong.com/index.php?title=Computing_overhang&diff=11029Computing overhang2012-10-11T15:07:44Z<p>Pedrochaves: </p>
<hr />
<div>'''Computing overhang''' refers to a situation where new algorithms can exploit existing computing power far more efficiently than before. This can happen if previously used algorithms have been suboptimal.<br />
<br />
In the context of [[Artificial General Intelligence]], this signifies a situation where it becomes possible to create AGIs that can be run using only a small fraction of the easily available hardware resources. This could lead to an [[intelligence explosion]], or to a massive increase in the number of AGIs, as they could be easily copied to run on countless computers. This could make AGIs much more powerful than before, and present an [[existential risk]].<br />
<br />
==Examples==<br />
In 2010, the President's Council of Advisors on Science and Technology [http://www.whitehouse.gov/sites/default/files/microsites/ostp/pcast-nitrd-report-2010.pdf reported] that a benchmark production-planning model had become faster by a factor of 43 million between 1988 and 2003. Of this improvement, only a factor of roughly 1,000 was due to better hardware, while a factor of roughly 43,000 came from algorithmic improvements. This clearly reflects a situation in which new programming methods were able to use available computing power more efficiently.<br />
<br />
Enormous amounts of computing power are currently available in the form of supercomputers and distributed computing. Large AI projects can grow to fill these resources by using deeper and deeper search trees, as high-powered chess programs do, or by performing large numbers of parallel operations on extensive databases, as IBM's Watson did when playing ''Jeopardy!''. While the extra depth and breadth are helpful, a simple brute-force extension of existing techniques is unlikely to be the optimal use of the available computing resources. This leaves room for improvement on the algorithmic side, which is where most current work is focused.<br />
<br />
Though estimates of [[whole brain emulation]] place that level of computing power at least a decade away, it is very unlikely that the algorithms used by the human brain are the most computationally efficient way to produce AI. This is mainly because our brains evolved through natural selection and thus weren't deliberately designed to be easy to model or reimplement in software. As Yudkowsky [http://intelligence.org/files/LOGI.pdf puts it], human intelligence, created by evolution, carries the design signature of that process - one poorly adapted to ''deliberation''. By contrast, when we design complex systems in which the designer - us - collaborates with the system being constructed, we are faced with a new design signature and a route to AGI completely different from the process that gave birth to our brains.<br />
<br />
==References==<br />
*{{cite book<br />
| last1 = Muehlhauser<br />
| first1 = Luke<br />
| last2 = Salamon<br />
| first2 = Anna<br />
| contribution = Intelligence Explosion: Evidence and Import<br />
| year = 2012<br />
| title = The singularity hypothesis: A scientific and philosophical assessment<br />
| editor1-last = Eden<br />
| editor1-first = Amnon<br />
| editor2-last = Søraker<br />
| editor2-first = Johnny<br />
| editor3-last = Moor<br />
| editor3-first = James H.<br />
| editor4-last = Steinhart<br />
| editor4-first = Eric<br />
| place = Berlin<br />
| publisher = Springer<br />
| contribution-url = http://commonsenseatheism.com/wp-content/uploads/2012/02/Muehlhauser-Salamon-Intelligence-Explosion-Evidence-and-Import.pdf<br />
}}<br />
<br />
==See also==<br />
*[[Optimization process]]<br />
*[[Optimization]]</div>Pedrochaveshttps://wiki.lesswrong.com/index.php?title=Fun_theory&diff=11028Fun theory2012-10-11T14:53:30Z<p>Pedrochaves: </p>
<hr />
<div>'''Fun theory''' is the field of knowledge concerned with the concept of fun (as the opposite of boredom). It tries to answer questions such as how fun should be quantified, how desirable it is, and how it relates to lived human experience. It has been one of the major interests of [[Eliezer Yudkowsky]] while writing for Less Wrong.<br />
<br />
==The argument against Enlightenment==<br />
In discussions of [[transhumanism]] and related fields such as cryonics or lifespan extension, conservatives have raised the counterargument that such enhancements would bring boredom and the end of fun as we know it. More specifically, if we improve human minds to extreme levels of intelligence, all the challenges known today may come to bore us. Likewise, if superhumanly intelligent machines take care of our every need, it seems that no challenge or fun would remain. On this view, we should avoid that path of development. <br />
<br />
The implicit open question is whether the universe will offer, or we ourselves can create, ever more complex and sophisticated opportunities to delight, entertain and challenge ever more powerful and resourceful minds.<br />
<br />
== The concept of Utopia==<br />
Transhumanists are usually seen as working towards a better human future. This future is sometimes conceptualized, as George Orwell [http://www.orwell.ru/library/articles/socialists/english/e_fun aptly describes it], as a Utopia:<br />
<br />
<blockquote><br />
"It is a commonplace that the Christian Heaven, as usually portrayed, would attract nobody. Almost all Christian writers dealing with Heaven either say frankly that it is indescribable or conjure up a vague picture of gold, precious stones, and the endless singing of hymns... [W]hat it could not do was to describe a condition in which the ordinary human being actively wanted to be." <br />
</blockquote><br />
<br />
Imagining this perfect future where every problem is solved and where there is constant peace and rest - a close parallel, as seen above, to several religious Heavens - quickly leads to the conclusion that no one would actually want to live there.<br />
<br />
==Complex values and Fun Theory's solution==<br />
A key insight of Fun Theory, in its current embryonic form, is that ''eudaimonia'' - the classical ideal of a flourishing, happy life as the ultimate human goal - is [[Complexity of value|complicated]]. That is, there are many properties which contribute to a life worth living. We humans require many things to experience a fulfilled life: aesthetic stimulation, pleasure, love, social interaction, learning, challenge, and much more.<br />
<br />
It is a common mistake in discussions of future AI to extract a single element of human preferences and advocate that it alone be maximized. This would lead to neglect of all the other values. For example, if we simply optimize for pleasure or happiness, we will [[wireheading|"wirehead"]] - stimulate the relevant parts of our brain and experience bliss for eternity, but pursue no other experiences. If almost ''any'' element of our value system is absent, the human future will likely be very unpleasant. <br />
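The point can be made concrete with a toy model (the numbers and scoring rules below are illustrative inventions, not from the source): allocate a fixed effort budget across several values, and compare an optimizer that scores only pleasure against a "complex" scoring rule in which the worst-served value caps the total.

```python
# Toy model: a life allocates 10 units of effort across several values.
values = ["pleasure", "love", "learning", "challenge"]

def pleasure_only_score(allocation):
    # Single-element utility: only pleasure counts.
    return allocation["pleasure"]

def balanced_score(allocation):
    # Complex-value utility: the worst-served value caps the total,
    # so no single value can substitute for the others.
    return min(allocation[v] for v in values)

budget = 10

# The "wirehead" allocation maximizes pleasure_only_score...
wirehead = {v: 0 for v in values}
wirehead["pleasure"] = budget

# ...while a balanced allocation spreads the budget evenly.
balanced = {v: budget // len(values) for v in values}

print(pleasure_only_score(wirehead))  # 10
print(balanced_score(wirehead))       # 0 -- every other value is neglected
print(balanced_score(balanced))       # 2
```

Under the single-element score the wirehead looks optimal, but under the complex-value score it is the worst possible outcome, which is the mistake the paragraph above warns against.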
<br />
Enhanced humans are usually assumed to start with the value system of humans today, but we may choose to change it as we self-enhance. We may want to alter our own value system by eliminating values, like bloodlust, which on reflection we wish were absent. But there are many values which we, on reflection, want to keep, and since we humans have no basis for a value system other than our current one, Fun Theory must seek to satisfy the value system that we have, rather than inventing new values.<br />
<br />
Fun Theory thus seeks to let us keep our curiosity and love of learning intact, while preventing the extremes of boredom possible in a transhuman future if our strongly boosted intellects were to exhaust all challenges. More broadly, Fun Theory seeks to allow humanity to enjoy life when all needs are easily satisfied, and to avoid the pitfalls of the classical Utopian future.<br />
<br />
==External links==<br />
* George Orwell, [http://www.orwell.ru/library/articles/socialists/english/e_fun Why Socialists Don't Believe in Fun]<br />
* David Pearce, [http://paradise-engineering.com/ Paradise Engineering] and [http://www.hedweb.com/hedab.htm The Hedonistic Imperative], which provide a more nuanced alternative to wireheading.<br />
<br />
==See also==<br />
*[[The Fun Theory Sequence]]<br />
*[[Complexity of value]]<br />
*[[Metaethics sequence]]<br />
<br />
[[Category:Theses]]</div>Pedrochaveshttps://wiki.lesswrong.com/index.php?title=Fun_theory&diff=10977Fun theory2012-10-10T14:19:30Z<p>Pedrochaves: </p>
<hr />
<div>'''Fun theory''' is the field of knowledge occupied with studying the concept of fun (as the opposite of boredom). It tries to answer problems such as how we should quantify it, how desirable it is, and how it relates to the human living experience. It has been one of the major interests of Eliezer Yudkowsky while writing for Less Wrong.<br />
<br />
==The argument against Enlightenment==<br />
While discussing [[transhumanism]] and related fields such as cryonics or lifespan extension, conservatives have raised the counterargument that such enhancements would bring boredom and the end of fun as we know it. More specifically, if we self-improve human minds to extreme levels of intelligence, all challenges known today may bore us. Likewise, if superhumanly intelligent machines take care of our every need, it seems that neither challenge nor fun will remain. On this view, we should avoid that path of development. <br />
<br />
The implicit open question is whether the universe will offer, or we ourselves can create, ever more complex and sophisticated opportunities to delight, entertain and challenge ever more powerful and resourceful minds.<br />
<br />
== Utopia==<br />
Transhumanists work towards a much better human future--a Utopia--but, as George Orwell [http://www.orwell.ru/library/articles/socialists/english/e_fun aptly described], Utopians of all stripes, whether Socialist, Enlightenment, or Christian, have generally been unable to imagine futures where anyone would actually ''want'' to live. <br />
<br />
<blockquote><br />
It is a commonplace that the Christian Heaven, as usually portrayed, would attract nobody. Almost all Christian writers dealing with Heaven either say frankly that it is indescribable or conjure up a vague picture of gold, precious stones, and the endless singing of hymns... [W]hat it could not do was to describe a condition in which the ordinary human being actively wanted to be. <br />
</blockquote><br />
<br />
==Fun Theory and complex values==<br />
A key insight of Fun Theory, in its current embryonic form, is that eudaimonia is [[Complexity of value|complicated]]--there are many properties which contribute to a life worth living. We humans require many things to experience a fulfilled life: Aesthetic stimulation, pleasure, love, social interaction, learning, challenge, and much more.<br />
<br />
It is a common mistake in discussions of future AI to extract one element of human preferences and advocate that it alone be maximized. This would lead to the neglect of all other values. For example, if we simply optimize for pleasure or happiness, we will "wirehead"--stimulate the relevant parts of our brain and experience bliss for eternity, but pursue no other experiences. If almost ''any'' element of our value system is absent, the human future will likely be very unpleasant. <br />
Enhanced humans will inherit the value system of humans today, but we may choose to change it as we self-enhance. We may want to alter our own value system by eliminating values, like bloodlust, which on reflection we wish were absent. But there are many values which we, on reflection, want to keep, and since we humans have no basis for a value system other than our current one, Fun Theory must seek to maximize the value system that we have rather than invent new values.<br />
<br />
==Keep curiosity and challenge alive==<br />
Fun theory seeks to let us keep our curiosity and love of learning intact, while preventing the extremes of boredom possible in a transhuman future if our strongly boosted intellects have exhausted all challenges. More broadly, fun theory seeks to allow humanity to enjoy life when all needs are easily satisfied.<br />
<br />
==External links==<br />
* George Orwell, [http://www.orwell.ru/library/articles/socialists/english/e_fun Why Socialists Don't Believe in Fun]<br />
* David Pearce, [http://paradise-engineering.com/ Paradise Engineering] and [http://www.hedweb.com/hedab.htm The Hedonistic Imperative] provide a more nuanced alternative to wireheading.<br />
<br />
==See also==<br />
*[[The Fun Theory Sequence]]<br />
*[[Complexity of value]]<br />
*[[Metaethics sequence]]<br />
<br />
[[Category:Theses]]</div>Pedrochaveshttps://wiki.lesswrong.com/index.php?title=Fun_theory&diff=10976Fun theory2012-10-10T14:18:01Z<p>Pedrochaves: </p>
<hr />
<div>'''Fun theory''' is the field of knowledge that studies the concept of fun (as the opposite of boredom). It tries to answer questions such as how fun should be quantified, how desirable it is, and how it relates to the human experience of living. It has been one of the major interests of Eliezer Yudkowsky while writing for Less Wrong.<br />
<br />
==The argument against Enlightenment==<br />
While discussing [[transhumanism]] and related fields such as cryonics or lifespan extension, conservatives have raised the counterargument that such enhancements would bring boredom and the end of fun as we know it. More specifically, if we self-improve human minds to extreme levels of intelligence, all challenges known today may bore us. On this view, we should avoid that path of development. <br />
<br />
The open question is whether the universe will offer, or we ourselves can create, ever more complex and sophisticated opportunities to delight and challenge ever more powerful minds. Likewise, when superhumanly intelligent machines take care of our every need, what fun and challenges will remain? <br />
<br />
== Utopia==<br />
Transhumanists work towards a much better human future--a Utopia--but, as George Orwell [http://www.orwell.ru/library/articles/socialists/english/e_fun aptly described], Utopians of all stripes, whether Socialist, Enlightenment, or Christian, have generally been unable to imagine futures where anyone would actually ''want'' to live. <br />
<br />
<blockquote><br />
It is a commonplace that the Christian Heaven, as usually portrayed, would attract nobody. Almost all Christian writers dealing with Heaven either say frankly that it is indescribable or conjure up a vague picture of gold, precious stones, and the endless singing of hymns... [W]hat it could not do was to describe a condition in which the ordinary human being actively wanted to be. <br />
</blockquote><br />
<br />
==Fun Theory and complex values==<br />
A key insight of Fun Theory, in its current embryonic form, is that eudaimonia is [[Complexity of value|complicated]]--there are many properties which contribute to a life worth living. We humans require many things to experience a fulfilled life: Aesthetic stimulation, pleasure, love, social interaction, learning, challenge, and much more.<br />
<br />
It is a common mistake in discussions of future AI to extract one element of human preferences and advocate that it alone be maximized. This would lead to the neglect of all other values. For example, if we simply optimize for pleasure or happiness, we will "wirehead"--stimulate the relevant parts of our brain and experience bliss for eternity, but pursue no other experiences. If almost ''any'' element of our value system is absent, the human future will likely be very unpleasant. <br />
Enhanced humans will inherit the value system of humans today, but we may choose to change it as we self-enhance. We may want to alter our own value system by eliminating values, like bloodlust, which on reflection we wish were absent. But there are many values which we, on reflection, want to keep, and since we humans have no basis for a value system other than our current one, Fun Theory must seek to maximize the value system that we have rather than invent new values.<br />
<br />
==Keep curiosity and challenge alive==<br />
Fun theory seeks to let us keep our curiosity and love of learning intact, while preventing the extremes of boredom possible in a transhuman future if our strongly boosted intellects have exhausted all challenges. More broadly, fun theory seeks to allow humanity to enjoy life when all needs are easily satisfied.<br />
<br />
==External links==<br />
* George Orwell, [http://www.orwell.ru/library/articles/socialists/english/e_fun Why Socialists Don't Believe in Fun]<br />
* David Pearce, [http://paradise-engineering.com/ Paradise Engineering] and [http://www.hedweb.com/hedab.htm The Hedonistic Imperative] provide a more nuanced alternative to wireheading.<br />
<br />
==See also==<br />
*[[The Fun Theory Sequence]]<br />
*[[Complexity of value]]<br />
*[[Metaethics sequence]]<br />
<br />
[[Category:Theses]]</div>Pedrochaveshttps://wiki.lesswrong.com/index.php?title=Fun_theory&diff=10975Fun theory2012-10-10T14:17:20Z<p>Pedrochaves: </p>
<hr />
<div>'''Fun theory''' is the field of knowledge that studies the concept of fun (as the opposite of boredom). It tries to answer questions such as how fun should be quantified, how desirable it is, and how it relates to the human experience of living. It has been one of the major interests of Eliezer Yudkowsky while writing for Less Wrong.<br />
<br />
==The argument against Enlightenment==<br />
While discussing [[transhumanism]] and related fields such as cryonics or lifespan extension, conservatives have argued that such enhancements would bring boredom and the end of fun as we know it. More specifically, if we self-improve human minds to extreme levels of intelligence, all challenges known today may bore us. On this view, we should avoid that path of development. <br />
<br />
The open question is whether the universe will offer, or we ourselves can create, ever more complex and sophisticated opportunities to delight and challenge ever more powerful minds. Likewise, when superhumanly intelligent machines take care of our every need, what fun and challenges will remain? <br />
<br />
== Utopia==<br />
Transhumanists work towards a much better human future--a Utopia--but, as George Orwell [http://www.orwell.ru/library/articles/socialists/english/e_fun aptly described], Utopians of all stripes, whether Socialist, Enlightenment, or Christian, have generally been unable to imagine futures where anyone would actually ''want'' to live. <br />
<br />
<blockquote><br />
It is a commonplace that the Christian Heaven, as usually portrayed, would attract nobody. Almost all Christian writers dealing with Heaven either say frankly that it is indescribable or conjure up a vague picture of gold, precious stones, and the endless singing of hymns... [W]hat it could not do was to describe a condition in which the ordinary human being actively wanted to be. <br />
</blockquote><br />
<br />
==Fun Theory and complex values==<br />
A key insight of Fun Theory, in its current embryonic form, is that eudaimonia is [[Complexity of value|complicated]]--there are many properties which contribute to a life worth living. We humans require many things to experience a fulfilled life: Aesthetic stimulation, pleasure, love, social interaction, learning, challenge, and much more.<br />
<br />
It is a common mistake in discussions of future AI to extract one element of human preferences and advocate that it alone be maximized. This would lead to the neglect of all other values. For example, if we simply optimize for pleasure or happiness, we will "wirehead"--stimulate the relevant parts of our brain and experience bliss for eternity, but pursue no other experiences. If almost ''any'' element of our value system is absent, the human future will likely be very unpleasant. <br />
Enhanced humans will inherit the value system of humans today, but we may choose to change it as we self-enhance. We may want to alter our own value system by eliminating values, like bloodlust, which on reflection we wish were absent. But there are many values which we, on reflection, want to keep, and since we humans have no basis for a value system other than our current one, Fun Theory must seek to maximize the value system that we have rather than invent new values.<br />
<br />
==Keep curiosity and challenge alive==<br />
Fun theory seeks to let us keep our curiosity and love of learning intact, while preventing the extremes of boredom possible in a transhuman future if our strongly boosted intellects have exhausted all challenges. More broadly, fun theory seeks to allow humanity to enjoy life when all needs are easily satisfied.<br />
<br />
==External links==<br />
* George Orwell, [http://www.orwell.ru/library/articles/socialists/english/e_fun Why Socialists Don't Believe in Fun]<br />
* David Pearce, [http://paradise-engineering.com/ Paradise Engineering] and [http://www.hedweb.com/hedab.htm The Hedonistic Imperative] provide a more nuanced alternative to wireheading.<br />
<br />
==See also==<br />
*[[The Fun Theory Sequence]]<br />
*[[Complexity of value]]<br />
*[[Metaethics sequence]]<br />
<br />
[[Category:Theses]]</div>Pedrochaveshttps://wiki.lesswrong.com/index.php?title=Reflective_decision_theory&diff=10974Reflective decision theory2012-10-10T13:55:20Z<p>Pedrochaves: </p>
<hr />
<div>'''Reflective decision theory''' is a term occasionally used to refer to a decision theory that would allow an agent to take actions without triggering regret. This regret is conceptualized, under [[Causal Decision Theory]], as a [[Reflective inconsistency]]: a divergence between the agent who took the action and the ''same'' agent reflecting upon it afterward.<br />
<br />
==The Newcomb's Problem example==<br />
This problem represents the best example of what [[Eliezer Yudkowsky]] calls the [http://lesswrong.com/lw/nc/newcombs_problem_and_regret_of_rationality/ regret of rationality]. <br />
Simply put, consider an alien superintelligence that comes to you and wants to play a simple game:<br />
<br />
:It sets two boxes in front of you - Box A and Box B.<br />
:Box A is transparent and contains $1,000. Box B is opaque and contains either $1,000,000 or nothing.<br />
<br />
:You can choose to take ''both'' boxes or to take only Box B.<br />
<br />
:The catch: this superintelligence is a Predictor (whose predictions have so far always been ''correct''), and it will put the $1,000,000 in Box B if, and only if, it predicts you will take only Box B.<br />
<br />
:By the time you decide, the alien has already made the prediction and left the scene, and you are faced with the choice. A or B?<br />
<br />
The dominant view in the literature regards choosing ''both'' boxes as the more rational decision, even though the alien in effect rewards irrational agents. When considering thought experiments such as this, it has been suggested that a sufficiently powerful [[AGI]] would solve it by accessing its own source code and self-modifying. This would allow it to alter its own behavior and decision process, dissolving the paradox by precommitting to a certain choice in such situations. <br />
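The trade-off behind the two choices can be made concrete with a simple expected-value calculation. Below is a minimal Python sketch; the payoff amounts come from the problem statement above, while the 99% predictor accuracy is an illustrative assumption, not part of the original formulation:<br />
<br />
```python
def expected_value(one_box: bool, accuracy: float) -> float:
    """Expected payoff in Newcomb's problem against a predictor
    that is correct with probability `accuracy`."""
    small, big = 1_000, 1_000_000  # Box A and Box B payoffs
    if one_box:
        # Box B is full exactly when the predictor foresaw one-boxing.
        return accuracy * big
    # Two-boxing: you always get Box A; Box B is full only if the predictor erred.
    return small + (1 - accuracy) * big

# With a 99%-accurate predictor, one-boxing dominates in expectation:
print(expected_value(True, 0.99))   # about 990,000
print(expected_value(False, 0.99))  # about 11,000
```
<br />
Under these payoffs, one-boxing has the higher expectation for any accuracy above roughly 50.05%, which is what makes the predictor's reward of "irrational" agents so striking.<br />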
<br />
In order for us to understand the AGI's behavior in this and other situations and to be able to implement it, we will have to create a reflectively consistent decision theory. Particularly, reflective consistency would be needed to ensure that it preserved a [[Friendly Artificial Intelligence|friendly]] value system throughout its self-modifications.<br />
<br />
Eliezer Yudkowsky has proposed a theoretical solution to the problem of reflective decision theory in his [[Timeless Decision Theory]].<br />
<br />
==Further Reading & References==<br />
*[http://intelligence.org/upload/TDT-v01o.pdf Timeless Decision Theory] by Eliezer Yudkowsky<br />
*[http://johncarlosbaez.wordpress.com/2011/03/07/this-weeks-finds-week-311/ Interview] of Eliezer Yudkowsky by John Baez, March 7th, 2011<br />
<br />
==See also==<br />
<br />
*[[Decision theory]]<br />
*[[Reflective inconsistency]]<br />
*[[Timeless Decision Theory]]<br />
*[[Complexity of value]]<br />
<br />
[[Category:Concepts]]</div>Pedrochaves