Open problems are clearly defined problems[1] that have not been solved. In older fields, such as Mathematics, the list is rather intimidating. Rationality, on the other hand, seems to have no list.
While we are all here together to crunch on problems, let's shoot higher than thinking of solutions and then finding problems to match them. What questions remain unsolved? Is it reasonable to assume those questions have concrete, absolute answers?
The catch is that these problems cannot be inherently fuzzy. "How do I become less wrong?" is not a problem that can be clearly defined, and as such it does not have a concrete, absolute answer. Does Rationality have a set of problems that can be clearly defined? If not, how do we work toward getting our problems clearly defined?
See also: Open problems at LW:Wiki
1: "Clearly defined" essentially means a formal, unambiguous definition. "Solving" such a problem would constitute a formal proof.
What does it mean to deal rationally with moral uncertainty? If Nick Bostrom and Toby Ord's solution is right, how do we apply it in practice?
ETA: this isn't a "clearly defined" question in the sense you mentioned, but I'll leave it up anyway; apologies
As I pointed out in that thread, their solution doesn't work. You would need to choose an aggregation mechanism to combine votes, and different mechanisms will cause different systematic outcomes. Notably, some mechanisms will result in always choosing actions from one category; others will result in sampling from different categories proportionally to their votes (much as, e.g., the American system always chooses the most popular candidate, resulting in a two-party equilibrium, while many European systems allocate seats proportionally to votes, allo...
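To make the contrast concrete, here is a minimal sketch of the two families of aggregation mechanisms: a winner-take-all rule that always picks the single most-weighted action, and a proportional rule that samples actions in proportion to the weight behind them. All theory names, actions, and weights below are hypothetical placeholders, not anything from Bostrom and Ord's actual proposal.

```python
import random
from collections import Counter

# Hypothetical votes: moral theory -> (preferred action, credence/weight)
votes = {
    "utilitarianism": ("donate", 0.5),
    "deontology": ("keep_promise", 0.3),
    "virtue_ethics": ("donate", 0.2),
}

def tally(votes):
    """Sum the weight behind each candidate action."""
    totals = Counter()
    for action, weight in votes.values():
        totals[action] += weight
    return totals

def plurality_choice(votes):
    """Always pick the action with the most total weight
    (analogous to a winner-take-all electoral system)."""
    totals = tally(votes)
    return max(totals, key=totals.get)

def proportional_choice(votes, rng=random):
    """Sample an action with probability proportional to its total weight
    (analogous to proportional representation)."""
    totals = tally(votes)
    actions, weights = zip(*totals.items())
    return rng.choices(actions, weights=weights, k=1)[0]

if __name__ == "__main__":
    print(plurality_choice(votes))  # always "donate"
    sampled = Counter(proportional_choice(votes) for _ in range(10_000))
    print(sampled)                  # roughly 70% "donate", 30% "keep_promise"
```

Over repeated decisions, the first rule locks in whichever category of action carries the most weight, while the second spreads choices across categories; that is the systematic difference between mechanisms described above.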