The Riemann hypothesis seems like a special case, since it's a purely mathematical proposition. A real world problem is more likely to require Eliezer's brand of FAI.
Also, I believe solving FAI requires solving a problem not on your list, namely solving GAI. :-)
If you disagree that (a) looks easier than (b), congratulations: you've been successfully brainwashed by Eliezer :-)
This was supposed to be humour, right?
OK, that didn't come across as intended. Edited the post.
A real world problem is more likely to require Eliezer's brand of FAI.
It seems to me that human engineers don't spend a lot of time thinking about the value of boredom or the problem of consciousness when they design airplanes. Why should an AI need to do that? If the answer involves "optimizing too hard", then doesn't the injunction "don't optimize too hard" look easier to formalize than CEV?
According to Eliezer, making AI safe requires solving two problems:
1) Formalize a utility function whose fulfillment would constitute "good" to us. CEV is intended as a step toward that.
2) Invent a way to code an AI so that it's mathematically guaranteed not to change its goals through many cycles of self-improvement, negotiation, etc. TDT is intended as a step toward that.
It is obvious to me that (2) must be solved, but I'm not sure about (1). The problem with (1) is that it asks us to formalize a whole lot of things that don't look like they should be necessary. If the AI is tasked with building a faster and more efficient airplane, does it really need to understand that humans don't like to be bored?
To put the question sharply, which of the following looks easier to formalize:
a) Please output a proof of the Riemann hypothesis, and please don't get out of your box along the way.
b) Please do whatever the CEV of humanity wants.
Note that I'm not asking if (a) is easy in absolute terms, only if it's easier than (b). If you disagree that (a) looks easier than (b), why?
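For what it's worth, the mathematical core of (a) already has a standard one-line formalization (here stated the usual way, in terms of the nontrivial zeros of the zeta function), which is part of why it looks easier to pin down than (b):

```latex
% Riemann hypothesis: every zero of zeta in the critical strip
% lies on the critical line Re(s) = 1/2.
\[
\zeta(s) = 0 \;\wedge\; 0 < \operatorname{Re}(s) < 1
\;\implies\; \operatorname{Re}(s) = \tfrac{1}{2}
\]
```

Nothing in (b) has a comparably compact statement; the open problem in (a) is the proof, while in (b) even writing down what is to be proved is unsolved.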