**Previously in series**: Aiming at the Target

Yesterday I spoke of how "When I think you’re a powerful intelligence, and I think I know something about your preferences, then I’ll predict that you’ll steer reality into regions that are higher in your preference ordering."

You can quantify this, at least in theory. Suppose you have (A) the agent or optimization process’s preference ordering, and (B) a measure on the space of outcomes – which, for discrete outcomes in a finite space of possibilities, could just consist of counting them. Then you can quantify how small a target is being hit, within how large a greater region.

Then we count the total number of states whose rank in the preference ordering is equal to or greater than that of the outcome achieved, or integrate over the measure of such states. Dividing this by the total size of the space gives you the relative smallness of the target – did you hit an outcome that was one in a million? One in a trillion?

Actually, most optimization processes produce "surprises" that are exponentially more improbable than this – you’d need to try far more than a trillion random reorderings of the letters in a book, to produce a play of quality equalling or exceeding Shakespeare. So we take the log base two of the reciprocal of the improbability, and that gives us optimization power in bits.
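For a discrete, finite outcome space, the computation above – count the states at least as preferred as the achieved outcome, divide by the size of the space, and take the log base two of the reciprocal – can be sketched in a few lines of Python. The function and parameter names here are illustrative, not from the post:

```python
from math import log2

def optimization_power_bits(outcomes, preference_rank, achieved):
    """Optimization power in bits, for a finite space of discrete outcomes.

    outcomes:        the full space of possible outcomes
    preference_rank: maps an outcome to its rank (higher = more preferred)
    achieved:        the outcome the optimization process actually hit
    """
    achieved_rank = preference_rank(achieved)
    # Count states ranked equal to or above the achieved outcome.
    at_least_as_good = sum(1 for o in outcomes
                           if preference_rank(o) >= achieved_rank)
    # Relative smallness of the target within the whole space.
    p = at_least_as_good / len(outcomes)
    # Log base two of the reciprocal gives optimization power in bits.
    return log2(1 / p)

# Example: hitting the single best of 1024 states is a one-in-1024
# target, i.e. 10 bits of optimization power.
bits = optimization_power_bits(range(1024), lambda o: o, 1023)
```

Hitting the median outcome, by contrast, would score only one bit – half the space is at least that good, so a random selection does as well half the time.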

This figure – roughly, the improbability of an "equally preferred" outcome being produced by a random selection from the space (or measure on the space) – forms the foundation of my Bayesian view of intelligence, or to be precise, optimization power. It has many subtleties:

