# Difference between revisions of "Inductive bias"



## Revision as of 09:23, 30 May 2009

> The inductive bias of a learning algorithm is the set of assumptions that the learner uses to predict outputs given inputs that it has not encountered.
>
> —Tom Mitchell^{[1]}

In the Bayesian framework, inductive bias is encoded in the prior distribution.
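As a minimal sketch of this idea (assuming a Beta–Bernoulli coin-flipping model, which is not from the article): two Bayesian learners see the same data but hold different priors, and their predictions for an unseen flip differ accordingly. The prior *is* the inductive bias.

```python
# Two Bayesian learners with different Beta priors over a coin's
# heads-probability. Given the same observations, their posterior
# predictions for an unseen flip differ: the prior encodes the bias.
# (Illustrative sketch; the Beta-Bernoulli model is an assumption,
# not something the article specifies.)

def posterior_predictive_heads(heads, tails, alpha, beta):
    """P(next flip = heads | data) under a Beta(alpha, beta) prior.

    The Beta prior is conjugate to the Bernoulli likelihood, so the
    posterior is Beta(alpha + heads, beta + tails), and the posterior
    predictive probability of heads is that distribution's mean.
    """
    return (alpha + heads) / (alpha + heads + beta + tails)

data = (3, 1)  # observed: 3 heads, 1 tail

# Weak bias: flat Beta(1, 1) prior -- the data dominate.
uniform = posterior_predictive_heads(*data, alpha=1, beta=1)

# Strong bias: Beta(50, 50) prior, a firm assumption the coin is fair.
fair = posterior_predictive_heads(*data, alpha=50, beta=50)

print(round(uniform, 3))  # 0.667 -- prediction tracks the data
print(round(fair, 3))     # 0.51  -- prediction stays near fairness
```

With only four observations, the learner carrying the strong fairness prior barely moves from 0.5, while the near-uninformative prior lets the sample frequency dominate: same evidence, different generalizations.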

## See also

- Statistical bias

## Blog posts

- ["Inductive Bias"](http://lesswrong.com/lw/hg/inductive_bias/) by Eliezer Yudkowsky

## Footnotes

1. ↑ Tom M. Mitchell (1980). *The need for biases in learning generalizations* (PDF). http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.120.4179