Priors

In the context of Bayes's Theorem, Priors refer generically to the beliefs an agent holds regarding a fact, hypothesis or consequence, before being presented with evidence. More technically, in order for the agent to calculate a posterior probability using Bayes's Theorem, a prior probability and a likelihood distribution are needed.
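
In symbols, for a hypothesis H and evidence E, Bayes's Theorem combines these two ingredients:

    P(H | E) = P(E | H) × P(H) / P(E)

where P(H) is the prior probability, P(E | H) is the likelihood, and P(H | E) is the resulting posterior probability.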

Examples

Suppose you had a barrel containing some number of red and white balls. You start with the belief that each ball was independently assigned red color (vs. white color) at some fixed probability. Furthermore, you start out ignorant of this fixed probability (the parameter could be anywhere between 0 and 1). Each red ball you see then makes it more likely that the next ball will be red (following a Laplacian Rule of Succession).
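
As a minimal sketch of this update (illustrative Python; the function name is made up for this example):

    # Laplace's Rule of Succession: with a uniform prior over the unknown
    # proportion of red balls, after seeing r red balls in n draws, the
    # probability that the next ball is red is (r + 1) / (n + 2).
    def probability_next_red(reds_seen, draws):
        return (reds_seen + 1) / (draws + 2)

    print(probability_next_red(0, 0))  # 0.5 -- no evidence yet
    print(probability_next_red(3, 3))  # 0.8 -- each red ball raises the estimate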

On the other hand, if you start out with the prior belief that the barrel contains exactly 10 red balls and 10 white balls, then each red ball you see makes it less likely that the next ball will be red (because there are fewer red balls remaining).
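
For contrast, a sketch of the same question under this fixed-composition prior (again illustrative Python, assuming draws without replacement):

    # With a prior of exactly 10 red and 10 white balls drawn without
    # replacement, each red ball seen leaves fewer red balls remaining.
    def probability_next_red_fixed(reds_seen, whites_seen, total_red=10, total_white=10):
        remaining_red = total_red - reds_seen
        remaining_total = (total_red + total_white) - (reds_seen + whites_seen)
        return remaining_red / remaining_total

    print(probability_next_red_fixed(0, 0))  # 0.5
    print(probability_next_red_fixed(3, 0))  # ~0.41 -- each red ball lowers the estimate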

Thus our prior can affect how we interpret the evidence. The first prior is an inductive prior - things that happened before are predicted to happen again with greater probability. The second prior is anti-inductive - the more red balls we see, the fewer we expect to see in the future.

In both cases, you started out believing something about the barrel - presumably because someone else told you, or because you saw it with your own eyes. But then their words, or even your own eyesight, were themselves evidence, and you must have had prior beliefs about probabilities and likelihoods in order to interpret that evidence. So it seems that an ideal Bayesian would need some sort of inductive prior at the very moment they were born. Where an ideal Bayesian would get this prior has occasionally been a matter of considerable controversy in the philosophy of probability.

As a real-life example, consider two leaders from different political parties. Each one has his own beliefs about social organization and the roles of people and government in society. These differences can be attributed to a wide range of factors, from genetic variability to educational influences on their personalities, and they condition the politics and laws each wants to implement. However, neither can show that his beliefs are better than those of the other, unless he can show that his priors were generated by sources which track reality better[1].


Prior probability

This specific term usually refers to a prior that is already based on considerable evidence - for example, when we estimate the number of red balls after running 100 similar experiments, or after hearing how the barrel was filled.

As a complementary example, suppose there are a hundred boxes, one of which contains a diamond - and this is all you know about the boxes. Then your prior probability that a box contains a diamond is 1%, or prior odds of 1:99.

Later you may run a diamond-detector over a box, which is 88% likely to beep when a box contains a diamond, and 8% likely to beep (a false positive) when a box doesn't contain a diamond. If the detector beeps, this represents the introduction of evidence with a likelihood ratio of 11:1 in favor of a diamond, which sends the prior odds of 1:99 to posterior odds of 11:99 = 1:9. But if someone asks you "What was your prior probability?" you would still say "My prior probability was 1%, but I saw evidence which raised the posterior probability to 10%."
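
The odds arithmetic above, written out as a short sketch (plain Python, using only the numbers given in this paragraph):

    # Prior odds 1:99; a beep has likelihood ratio 0.88 / 0.08 = 11:1 in
    # favor of a diamond, giving posterior odds 11:99 = 1:9.
    prior_odds = 1 / 99
    likelihood_ratio = 0.88 / 0.08                  # 11.0
    posterior_odds = prior_odds * likelihood_ratio  # 1/9
    posterior_probability = posterior_odds / (1 + posterior_odds)
    print(posterior_probability)                    # 0.1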

Your prior probability in this case was actually a prior belief based on a certain amount of information - i.e., someone told you that one out of a hundred boxes contained a diamond. Indeed, someone told you how the detector worked - what sort of evidence a beep represented. In conclusion, the term prior probability usually refers to a single summary judgment about one variable before the evidence arrives, as opposed to the Bayesian's more general notion of priors described above.

References

  1. Robin Hanson (2006). "Uncommon Priors Require Origin Disputes". Theory and Decision 61 (4): 319–328. http://hanson.gmu.edu/prior.pdf

Blog posts

See also