# Bayesian decision theory

Bayesian decision theory refers to a decision theory informed by Bayesian probability. It is a statistical framework that quantifies the tradeoffs between various decisions using probabilities and costs. An agent operating under such a decision theory uses the concepts of Bayesian statistics to estimate the expected value of its actions, and updates its expectations based on new information. Such agents are often referred to as estimators.

From the perspective of Bayesian decision theory, any kind of probability distribution - such as the distribution for tomorrow's weather - represents a prior distribution. That is, it represents what we expect today the weather will be tomorrow. This contrasts with frequentist inference, the classical probability interpretation, in which conclusions about an experiment are drawn from a set of repetitions of that experiment, each producing statistically independent results. For a frequentist, a probability function would simply be a distribution function with no special epistemic meaning.
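The updating process mentioned above can be sketched with Bayes' rule. Here is a minimal illustration, assuming a hypothetical binary "rain tomorrow" hypothesis and an imagined weather forecast with made-up accuracy numbers:

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return the posterior P(H | evidence) via Bayes' rule."""
    numerator = likelihood_if_true * prior
    # Total probability of the evidence under both hypotheses.
    evidence = numerator + likelihood_if_false * (1 - prior)
    return numerator / evidence

# Prior: today we assign a 0.3 chance of rain tomorrow (assumed number).
prior_rain = 0.3

# Suppose a forecast predicts rain 80% of the time when it does rain,
# and 20% of the time when it does not (assumed numbers).
posterior_rain = bayes_update(prior_rain, 0.8, 0.2)
print(round(posterior_rain, 3))  # 0.632
```

Observing the rain forecast raises the agent's credence in rain from 0.3 to about 0.63; the prior distribution has been revised into a posterior in light of new information.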

Suppose we intend to meet a friend tomorrow, and expect a 0.5 chance of rain. If we are choosing between various options for the meeting, with the pleasantness of some of the options (such as going to the park) being affected by the possibility of rain, we can assign values to the different options with or without rain. We can then pick the option whose expected value is the highest, given the probability of rain.
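The meeting example above can be sketched in a few lines. The pleasantness values below are assumed for illustration; only the 0.5 rain probability comes from the text:

```python
P_RAIN = 0.5  # probability of rain tomorrow, as in the example

# (value if it rains, value if it stays dry) — assumed numbers.
options = {
    "park": (1.0, 10.0),  # very pleasant when dry, miserable in rain
    "cafe": (7.0, 7.0),   # indoors, so rain does not matter
}

def expected_value(value_rain, value_dry, p_rain=P_RAIN):
    """Expected pleasantness, weighting each outcome by its probability."""
    return p_rain * value_rain + (1 - p_rain) * value_dry

# Pick the option with the highest expected value.
best = max(options, key=lambda name: expected_value(*options[name]))
print(best)  # cafe
```

With these numbers the park's expected value is 0.5·1 + 0.5·10 = 5.5, below the cafe's 7, so the agent chooses the cafe; with a lower rain probability the park would win instead.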

One definition of rationality, used both on Less Wrong and in economics and psychology, is behavior which obeys the rules of Bayesian decision theory. Due to computational constraints, following these rules perfectly is impossible, but naturally evolved brains do seem to mirror these probabilistic methods when they adapt to an uncertain environment. Such models and distributions may be reconfigured according to feedback from the environment.