The more competent AIs will be the ones conquering the universe, so what gets weighed against the low measure is the value of the universe being optimized in each of the possible ways.
If that's what we're worried about, then we might as well ask whether it's risky to randomly program a classical computer and then run it.
From David Deutsch's The Beginning of Infinity:
I'm not so sure we have the computing power to "simulate a person," but suppose we did. (Perhaps we will soon.) How would you respond to that worry?