# Likelihood ratio

{{arbitallink|https://arbital.com/p/likelihood_ratio/|Likelihood ratio}}


## Latest revision as of 19:44, 7 February 2020

A likelihood ratio is the ratio of two probabilities. It is often used to compare two hypotheses or models to measure the relative strength of evidence supporting them.

In the [[Odds|odds form]] of [[Bayes' theorem]], the likelihood ratio is the relative probability of observing evidence B if hypothesis A is true versus if hypothesis ¬A is true. A Bayesian update can therefore be calculated by converting the prior probability to odds, multiplying by the likelihood ratio, and converting the posterior odds back to a probability. Knowing the absolute probabilities of observing the evidence is unnecessary; all that matters is how many times more likely it is under one hypothesis than under the other.
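The three-step update described above (probability to odds, multiply by the likelihood ratio, odds back to probability) can be sketched in Python; the function name is illustrative:

```python
def update_odds(prior_p, likelihood_ratio):
    """Bayesian update via the odds form of Bayes' theorem."""
    prior_odds = prior_p / (1 - prior_p)            # convert probability to odds
    posterior_odds = prior_odds * likelihood_ratio  # multiply by the likelihood ratio
    return posterior_odds / (1 + posterior_odds)    # convert odds back to probability

# Prior P(A) = 0.25 (odds 1:3) and evidence twice as likely under A as under ¬A:
# posterior odds are 2:3, i.e. a posterior probability of 0.4.
print(update_odds(0.25, 2))
```

Note that only the ratio of the two likelihoods enters the calculation, matching the point above that the absolute probabilities of the evidence are not needed.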

If the likelihood ratio is known, Bayesian updates are faster and more intuitive to calculate using the odds form. For example, if you know that A being true makes the observation of B twice as likely as ¬A being true, the update can be calculated by converting the prior to odds, multiplying by two, and converting back. Additionally, if the prior is low, odds and probability are approximately equal (p=0.1 corresponds to odds of 0.111, and p=0.01 to odds of 0.0101), so the posterior probability can be approximated by skipping the conversions and simply multiplying the prior probability by two.
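A quick numerical check of the low-prior shortcut, comparing the exact odds-form update with simply multiplying the prior probability by the likelihood ratio (function name is illustrative):

```python
def update_odds(prior_p, likelihood_ratio):
    """Exact Bayesian update via the odds form."""
    prior_odds = prior_p / (1 - prior_p)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Low prior p = 0.01, likelihood ratio 2.
exact = update_odds(0.01, 2)   # ≈ 0.0198
approx = 0.01 * 2              # shortcut: 0.02

print(exact, approx)
```

The two values differ by about one percent here; the approximation degrades as the prior (or the posterior) moves away from zero.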