If you see the optimisation score as attached to a particular system (agent + code + hardware + available power), then there isn't a problem. The definition only fails if you want to talk about the optimisation power of an algorithm in a platonic sense.
Essentially I agree that that particular objection is largely ineffectual.
Upvoted because admitting to error is rare and admirable, even on Less Wrong :-)
As every school child knows, an advanced AI can be seen as an optimisation process - something that hits a very narrow target in the space of possibilities. The Less Wrong wiki entry proposes some measure of optimisation power:
This doesn't seem a fully rigorous definition - what exactly counts as "a million random tries"? It also measures how hard it would be to come up with the solution, not how good the solution is. An AI that comes up with a solution that is ten thousand bits harder to find, but only a tiny bit better than the human solution, is not one to fear.
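To make the "random tries" reading concrete, here is a minimal Monte Carlo sketch, assuming the measure in question is the log-improbability of a random try doing at least as well as the achieved outcome. The function name, the toy utility, and the sample counts are all my own illustrative choices, not anything from the wiki:

```python
import math
import random

def optimisation_power_bits(achieved, utility, sample_space, n_samples=1_000_000):
    """Estimate optimisation power (in bits) as -log2 of the fraction of
    randomly sampled outcomes that score at least as well as `achieved`.
    Purely an illustrative sketch of one reading of the definition."""
    achieved_score = utility(achieved)
    hits = sum(1 for _ in range(n_samples)
               if utility(sample_space()) >= achieved_score)
    if hits == 0:
        return float('inf')  # beyond the resolution of n_samples tries
    return -math.log2(hits / n_samples)

# Toy example: outcomes are numbers in [0, 1], utility is the number itself.
random.seed(0)
power = optimisation_power_bits(
    achieved=0.875,                       # the "solution" being scored
    utility=lambda x: x,                  # bigger is better
    sample_space=lambda: random.random(), # one "random try"
    n_samples=100_000,
)
# 0.875 beats about 87.5% of random draws, so the estimate is
# roughly -log2(0.125) = 3 bits.
```

Note how this makes the objection above visible: the score depends only on the rank of the outcome among random tries, so a solution ten thousand bits harder to find can still be only marginally better in utility.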
Other potential measures could come from taking any of the metrics I suggested in the reduced impact post and using them in reverse: to measure large deviations from the status quo, not small ones.
Anyway, before I reinvent the coloured wheel, I just wanted to check whether there is a fully defined, agreed-upon measure of optimisation power.