# Difference between revisions of "Optimization process"


## Revision as of 13:05, 20 November 2009

An **optimization process** is a process that systematically comes up with solutions that rank higher rather than lower relative to some ordering over outcomes; it hits small targets in a large search space, producing outcomes that you would *not* expect to see by sheer random chance, atoms bumping up against each other with no direction or ordering at all. If an entity pushes reality into some state across many contexts, not just by accident, then you could say it prefers that state.

Optimization is a very general notion that encompasses all kinds of order-generating processes other than sheer emergence; optimization is about choosing or selecting outcomes defined as better.

Probably the optimization process you're most familiar with is that of human intelligence. Humans don't do things randomly: we have very specific goals and rearrange the world in specific ways to meet our goals. Consider the monitor on which you read these words. That monitor is a *rather unlikely* object to have come about by chance, and so of course, it didn't. Human economies over many years built up the infrastructure needed to build a monitor, and then built it, because people prefer to be able to see their files. It might not seem so impressive if you're used to it, but there's a lot of cognitive work that goes on behind the scenes.

Another example of an optimization process would be natural selection, notable for its "first" status if not its power or speed. Evolution works because organisms that do better at surviving and reproducing propagate more of their traits to the next generation; in this way genes with higher fitness are systematically preferred, and complex machinery bearing the strange design signature of evolved things can be built up over time.
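The selection-and-reproduction loop described above can be sketched in a few lines. This is a toy illustration, not from the article: genomes are bit strings, fitness is simply the count of 1-bits, and fitter genomes are preferentially copied (with occasional mutation) into the next generation, so fitness rises systematically rather than by chance.

```python
import random

def fitness(genome):
    # Toy fitness function: count the 1-bits in the genome.
    return sum(genome)

def evolve(pop_size=50, genome_len=20, generations=100, seed=0):
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: sample parents with probability proportional to fitness.
        weights = [fitness(g) + 1 for g in population]  # +1 avoids zero weights
        parents = rng.choices(population, weights=weights, k=pop_size)
        # Reproduction with mutation: flip each bit with small probability.
        population = [[bit ^ (rng.random() < 0.01) for bit in g]
                      for g in parents]
    return max(fitness(g) for g in population)

# A random 20-bit genome averages 10 ones; selection pushes well above that.
print(evolve())
```

After a hundred generations the best genome's fitness sits far above the ~10 ones that sheer random configuration would produce, which is exactly the "hitting small targets" behavior the article describes.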

Or consider the famous chessplaying computer, Deep Blue. Outside of the narrow domain of selecting moves for chess games, it can't do anything impressive: but *as* a chessplayer, it was massively more effective than virtually all humans. Humans or evolution are more domain-general optimization processes than Deep Blue, but that doesn't mean they're more effective at chess specifically. (Although note in what contexts this *optimization process* abstraction is useful and where it fails to be useful: it's not obvious what it would mean for "evolution" to play chess, and yet it is useful to talk about the optimization power of natural selection, or of Deep Blue.)

Optimization, like evidence, can be measured in information-theoretic bits. We take the base-two logarithm of the reciprocal of the probability of the result. A one-in-a-million solution (a solution so good relative to your preference ordering that it would take a million random tries to find something that good or better) can be said to have log_2(1,000,000) = 19.9 bits of optimization. Compared to a sheerly random configuration of matter, any artifact you see is going to be much more optimized than this. The math describes laws and general principles for reasoning about optimization; as with probability theory, you oftentimes can't apply the math directly.
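The calculation described above is straightforward to sketch. In this illustrative snippet, `p` is the probability that a random draw from the search space does at least as well as the observed outcome, and the optimization power is the base-two logarithm of its reciprocal:

```python
import math

def bits_of_optimization(p):
    """Bits of optimization for an outcome hit with probability p
    by random chance: log2(1 / p)."""
    return math.log2(1 / p)

# The one-in-a-million solution from the text:
print(round(bits_of_optimization(1 / 1_000_000), 1))  # 19.9
```

So each additional bit of optimization halves the fraction of the search space that is at least as good as the result produced.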