Measuring in bits doesn't seem to capture what we care about.
Consider the following two functions: F(x,y) = x + y, and G(x,y) = the value of boolean expression x when its variables are given the truth-assignment y. In each case, I am given values x and z and want to pick a y so that F(x,y) = z or G(x,y) = z.
Doing this for F is trivial, while doing it for G is not known to be tractable. An AI able to efficiently solve SAT is much more powerful than one able to do subtraction. But you can arrange for the chance of succeeding by picking y at random to be the same in each case.
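To make the asymmetry concrete, here is a minimal Python sketch (the function names and the brute-force search are my own illustration, not anything from the problem statement): inverting F is a single subtraction, while the only general method known for inverting G is to search the space of assignments, which grows exponentially with the number of variables.

```python
# Hypothetical sketch contrasting the two inversion problems.
from itertools import product


def invert_F(x, z):
    """F(x, y) = x + y, so finding y with F(x, y) = z is just subtraction."""
    return z - x


def invert_G(formula, z, num_vars):
    """G(formula, y) = truth value of `formula` under assignment y.

    `formula` is a function taking a tuple of booleans. Finding y with
    G(formula, y) = z is SAT-like, so the only general method known is
    exhaustive search over all 2^n assignments.
    """
    for assignment in product([False, True], repeat=num_vars):
        if formula(assignment) == z:
            return assignment
    return None


# Usage: inverting F is one arithmetic operation...
print(invert_F(42, 100))  # -> 58

# ...while inverting G means searching the assignment space.
phi = lambda v: (v[0] or v[1]) and (not v[0] or v[2])
print(invert_G(phi, True, 3))  # -> (False, True, False)
```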
A definition that says GNU bc is as powerful as an oracle for all problems in NP is not a very useful definition.
Moral: knowing an algorithm's chance of succeeding "by chance" doesn't tell you much about how sophisticated it is.
As every schoolchild knows, an advanced AI can be seen as an optimisation process - something that hits a very narrow target in the space of possibilities. The Less Wrong wiki entry proposes a measure of optimisation power:
This doesn't seem like a fully rigorous definition - what exactly is meant by a million random tries? Also, it measures how hard it would be to come up with that solution, but not how good that solution is. An AI that comes up with a solution that is ten thousand bits harder to find, but only a tiny bit better than the human solution, is not one to fear.
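For concreteness, one natural reading of the proposal is "minus log base 2 of the probability that a random try does at least as well as the chosen solution", which a million random tries can only resolve down to about log2(10^6) ≈ 20 bits. Here is a rough Monte Carlo sketch under that reading; the function and parameter names are my own placeholders:

```python
import math
import random


def optimisation_power_bits(solution_quality, sample_random_solution, quality, tries=1_000_000):
    """Estimate optimisation power as -log2 of the fraction of random tries
    that score at least as well as the given solution (one reading of the
    'bits' measure). With a million tries, the most it can report is
    log2(1e6) ~= 19.9 bits."""
    at_least_as_good = sum(
        quality(sample_random_solution()) >= solution_quality
        for _ in range(tries)
    )
    if at_least_as_good == 0:
        return math.log2(tries)  # only a lower bound at this sample size
    return -math.log2(at_least_as_good / tries)


# Toy usage: how hard is it to match a 'solution' of quality 95 when random
# tries are uniform on [0, 100]?  About -log2(0.05) ~= 4.3 bits.
print(optimisation_power_bits(95, lambda: random.uniform(0, 100), lambda s: s))
```

Even under this reading, the second objection stands: the number only says how improbable the solution is to stumble upon, not how much better it is than easier-to-find alternatives.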
Another potential measurement would be to take any of the metrics I suggested in the reduced impact post and use them in reverse: to measure large deviations from the status quo rather than small ones.
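For illustration only, here is a toy version of that reversal, assuming the metric is some distance between a forecast of world-features without the AI and the features actually observed; both the feature vectors and the Euclidean distance are placeholders I've made up, not the metrics from the reduced impact post:

```python
import math


def deviation_from_status_quo(baseline_features, observed_features):
    """Toy 'reduced impact metric in reverse': a plain Euclidean distance
    between forecast world-features without the AI and the features actually
    observed. Large values indicate large deviations from the status quo;
    the feature choice and the distance are placeholders."""
    return math.sqrt(sum((b - o) ** 2 for b, o in zip(baseline_features, observed_features)))


# Usage: small deviation vs. large deviation.
print(deviation_from_status_quo([1.0, 2.0, 3.0], [1.1, 2.0, 2.9]))    # ~0.14
print(deviation_from_status_quo([1.0, 2.0, 3.0], [10.0, -5.0, 40.0]))  # ~38.7
```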
Anyway, before I reinvent the coloured wheel, I just wanted to check whether there is a fully defined, agreed-upon measure of optimisation power.