Open problems are clearly defined problems[1] that have not been solved. In older fields, such as Mathematics, the list is rather intimidating. Rationality, on the other hand, seems to have no such list.
While we're all here together to crunch on problems, let's aim higher than thinking of solutions and then looking for problems that match them. What questions remain unsolved? Is it reasonable to assume those questions have concrete, absolute answers?
The catch is that these problems cannot be inherently fuzzy. "How do I become less wrong?" is not a problem that can be clearly defined, and as such it does not have a concrete, absolute answer. Does Rationality have a set of problems that can be clearly defined? If not, how do we work toward getting our problems clearly defined?
See also: Open problems at LW:Wiki
1: "Clearly defined" essentially means a formal, unambiguous definition. "Solving" such a problem would constitute a formal proof.
The model of the environment concerns observations and behaviors, not statements about "uncomputability" and such. No observation should be left out or declared impossible. If you, as a human, decide to trust something you label a "halting oracle", that's your decision, and it's a decision you'd want any trusted AI to carry through as well.
I suspect that the roots of this confusion lie in something not unlike the mind projection fallacy, with magical properties attributed to models, but I'm not competent to discuss the domain-specific aspects of this question.