With FAI, you have a commensurate reason to take the risk.
Sure, but if the Oracle AI is used as a stepping stone towards FAI, then you also have a reason to take the risk.
I guess you could argue that the combined risk of Oracle AI + Friendly AI is higher than that of going straight for FAI, but you can't be sure how much the Oracle AI (or any other less powerful, constrained, narrow-domain AI) could mitigate the FAI risk. At least it doesn't seem obvious to me.
To the extent you should expect it to be useful. It's not clear in what way it can even in principle help with specifying morality. (See also this thread.)
Assume you have a working halting oracle. Now what? (Admittedly, you could use it to give yourself, in effect, infinite time to think about the problem.)
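To make the "now what?" concrete, here is a minimal sketch of the one thing a halting oracle straightforwardly buys you: settling any conjecture of the form "no counterexample exists." The hypothetical `halts()` predicate is an assumption (no such total function can exist); everything else is ordinary code, illustrated with Goldbach's conjecture as a stand-in for the Riemann hypothesis.

```python
# Sketch: a halting oracle settles any "no counterexample exists" conjecture.
# halts() is hypothetical -- no computable function can implement it.

def is_prime(n):
    """Trial-division primality check."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_holds(n):
    """True iff the even number n is a sum of two primes."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def search_for_counterexample():
    """Runs forever iff Goldbach's conjecture is true; halts on a counterexample."""
    n = 4
    while goldbach_holds(n):
        n += 2
    return n

# With a halting oracle, a single query settles the conjecture:
#   halts(search_for_counterexample) is False  <=>  Goldbach's conjecture is true.
# Note what this does NOT give you: any help formalizing "good", a utility
# function, or anything else in problem (1) below.
```

The point of the sketch is the gap it exposes: the oracle answers crisp mathematical questions whose truth conditions we can already write down, which is exactly the part of the problem that was never the hard part.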
According to Eliezer, making AI safe requires solving two problems:
1) Formalize a utility function whose fulfillment would constitute "good" to us. CEV is intended as a step toward that.
2) Invent a way to code an AI so that it's mathematically guaranteed not to change its goals after many cycles of self-improvement, negotiation, etc. TDT is intended as a step toward that.
It is obvious to me that (2) must be solved, but I'm not sure about (1). The problem in (1) is that we're asked to formalize a whole lot of things that don't look like they should be necessary. If the AI is tasked with building a faster and more efficient airplane, does it really need to understand that humans don't like to be bored?
To put the question sharply, which of the following looks easier to formalize:
a) Please output a proof of the Riemann hypothesis, and please don't get out of your box along the way.
b) Please do whatever the CEV of humanity wants.
Note that I'm not asking if (a) is easy in absolute terms, only if it's easier than (b). If you disagree that (a) looks easier than (b), why?