That was the terminology - though it isn't just the terminology that is busted here.
Frankly, I find it hard to believe that you're taking this idea seriously.
I haven't decided whether the idea is good or bad - I haven't yet evaluated it properly.
But as far as I can tell, your objection to it is incorrect. A naive search program would have very low optimisation power by Eliezer's criteria - is there a flaw in my argument?
As every schoolchild knows, an advanced AI can be seen as an optimisation process - something that hits a very narrow target in the space of possibilities. The Less Wrong wiki entry proposes a measure of optimisation power:
This doesn't seem to be a fully rigorous definition - what exactly is meant by a million random tries? Also, it measures how hard it would be to come up with that solution, but not how good that solution is. An AI that comes up with a solution that is ten thousand bits harder to find, but only a tiny bit better than the human solution, is not one to fear.
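For concreteness, here is one way to read the measure as a Monte Carlo estimate - a minimal sketch, assuming "a million random tries" means sampling outcomes at random and counting how many score at least as well as the achieved one. The function names and toy numbers are mine, not the wiki's; the example at the end illustrates the second objection on a small scale, with a solution that costs roughly ten more bits to find yet is better by less than 0.001 utility.

```python
import math
import random

def optimisation_power_bits(achieved, utility, sample_outcome, n_samples=1_000_000):
    """Estimate optimisation power in bits, read as: -log2 of the fraction
    of randomly sampled outcomes scoring at least as well as `achieved`.
    (Hypothetical names - the wiki entry gives no code.)"""
    target = utility(achieved)
    hits = sum(1 for _ in range(n_samples) if utility(sample_outcome()) >= target)
    # Clamp to avoid log(0): with a million tries, the estimate cannot
    # distinguish anything rarer than about 20 bits.
    hits = max(hits, 1)
    return -math.log2(hits / n_samples)

random.seed(0)
sample = lambda: random.random()   # toy world: outcomes uniform on [0, 1]
utility = lambda x: x              # utility is just the outcome's value

human_solution = 0.999             # ~10 bits: 1 in ~1000 random tries does as well
ai_solution = 0.9999999            # ~20+ bits, yet under 0.001 utility better
for s in (human_solution, ai_solution):
    print(s, optimisation_power_bits(s, utility, sample))
```

On this reading, the measure also saturates: anything rarer than one in a million looks identical, which is another way it fails to track how good the solution actually is.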
Other potential measures could be obtained by taking any of the metrics I suggested in the reduced impact post and using them in reverse: to measure large deviations from the status quo, not small ones - as in the sketch below.
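One concrete instance of that reversal, sketched under my own assumptions (the reduced impact post does not pin down a single metric): take a divergence between the distribution over futures with the AI and without it, and read a large divergence as strong optimisation rather than as a penalty to minimise. KL divergence is just one stand-in choice here; the distributions and feature values are invented for illustration.

```python
import math

def kl_divergence_bits(p, q, eps=1e-12):
    """KL divergence D(p || q) in bits between two distributions over a
    world-state feature, given as dicts mapping value -> probability."""
    return sum(pv * math.log2((pv + eps) / (q.get(k, 0.0) + eps))
               for k, pv in p.items() if pv > 0)

def impact_as_optimisation(status_quo, with_ai):
    """A reduced-impact metric used 'in reverse': instead of penalising a
    large divergence from the default future, read it as evidence of
    strong optimisation - a big push away from the status quo."""
    return kl_divergence_bits(with_ai, status_quo)

# Toy distributions over a coarse feature of the future.
status_quo = {"much_worse": 0.10, "same": 0.80, "much_better": 0.10}
weak_ai    = {"much_worse": 0.05, "same": 0.75, "much_better": 0.20}
strong_ai  = {"much_worse": 0.00, "same": 0.01, "much_better": 0.99}

print(impact_as_optimisation(status_quo, weak_ai))    # small: near the status quo
print(impact_as_optimisation(status_quo, strong_ai))  # large: future squeezed hard
```

Unlike the random-tries measure, this tracks how much the world actually moves, though it still says nothing about whether the movement is in a good direction.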
Anyway, before I reinvent the coloured wheel, I just wanted to check whether there is a fully defined, agreed-upon measure of optimisation power.