# Optimization process

*Contributors: Alex Altair, Pedrochaves*

An **optimization process** is any kind of process that systematically comes up with solutions better than those it used before. More technically, it is a process that searches a large space of possibilities and hits small, low-probability targets. When such a process is gradually guided by some agent toward a specific state, by searching for particular targets, we can say it *prefers* that state.

A simple example illustrates the idea: Eliezer Yudkowsky suggests natural selection as such a process. Through an implicit preference – better replicators – natural selection searches the vast space of possible genomes and hits small targets: efficient mutations.

Consider the human being. We are a *rather unlikely* object to have come about by chance, and of course, we didn't. Natural selection, over millions of years, built up the infrastructure needed to construct a fully functioning body, and it ended up doing so because people are rather efficient replicators.

− | |||

− | |||

− | |||

− | |||

− | |||

Or consider the famous chess-playing computer, Deep Blue. Outside of the narrow domain of selecting moves for chess games, it can't do anything impressive; but *as* a chess player, it was massively more effective than virtually all humans. Humans or evolution are more domain-general optimization processes than Deep Blue, but that doesn't mean they're more effective at chess specifically. (Note, though, in what contexts this *optimization process* abstraction is useful and where it fails to be useful: it's not obvious what it would mean for "evolution" to play chess, and yet it is useful to talk about the optimization power of natural selection, or of Deep Blue.)
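The abstract picture above – a search over a large space that hits small, low-probability targets – can be sketched as a toy hill climber. Everything here (the bit-string space, the scoring function, the parameter values) is an illustrative assumption, not something from the article:

```python
import random

# A toy "optimization process": a hill climber searching a large space
# (all 40-bit strings) for a small, low-probability target (all ones).

def score(bits):
    """Preference ordering: more ones is better."""
    return sum(bits)

def hill_climb(n_bits=40, steps=2000, seed=0):
    rng = random.Random(seed)
    current = [rng.randint(0, 1) for _ in range(n_bits)]
    for _ in range(steps):
        # Propose a single-bit mutation and keep it only if it is no
        # worse: the process systematically prefers better solutions.
        candidate = current[:]
        i = rng.randrange(n_bits)
        candidate[i] ^= 1
        if score(candidate) >= score(current):
            current = candidate
    return current

# Random guessing hits the all-ones target with probability 2**-40,
# yet this simple preference-guided search finds it (or comes very
# close) within a few thousand steps.
print(score(hill_climb()))
```

The point of the sketch is the asymmetry it demonstrates: the target is absurdly improbable under random sampling, but cheap to reach for even a crude process that retains improvements.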

## Measuring optimization power

One way to think mathematically about optimization, like evidence, is in information-theoretic bits. The optimization power of a process is the amount of [surprise](http://en.wikipedia.org/wiki/Self-information) we would feel at the result if no optimization were present; we therefore take the base-two logarithm of the reciprocal of the probability of the result. A one-in-a-million solution (a solution so good relative to your preference ordering that it would take a million random tries to find something that good or better) can be said to have log_2(1,000,000) ≈ 19.9 bits of optimization. Compared to a random configuration of matter, any artifact you see is going to be far more optimized than this. The math gives only laws and general principles for reasoning about optimization; as with probability theory, you often can't apply it directly.
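The one-in-a-million arithmetic above can be checked directly; a minimal sketch (the function name is mine, not the article's):

```python
import math

def optimization_power_bits(p):
    """Bits of optimization: the surprisal log2(1/p) of hitting an
    outcome that random search would produce with probability p."""
    return math.log2(1.0 / p)

# A one-in-a-million solution carries about 19.9 bits of optimization.
print(round(optimization_power_bits(1e-6), 1))  # prints 19.9
```

Each additional bit halves the probability of stumbling on an equally good result by chance, which is why even modest-sounding bit counts describe very improbable outcomes.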

## Further Reading &amp; References

− | |||

− | == | ||

− | |||

* [Optimization and the Singularity](http://lesswrong.com/lw/rk/optimization_and_the_singularity/)

* [Optimization](http://lesswrong.com/lw/tx/optimization/)


## See also

− | |||

* Preference

* Really powerful optimization process

*Revision as of 03:22, 30 September 2012*
