Open problems are clearly defined problems[1] that have not been solved. In older fields, such as mathematics, the list is rather intimidating. Rationality, on the other hand, seems to have no such list.
While we're all here together to crunch on problems, let's shoot higher than thinking of solutions and then hunting for problems that match them. What questions remain unsolved? Is it reasonable to assume those questions have concrete, absolute answers?
The catch is that these problems cannot be inherently fuzzy. "How do I become less wrong?" is not a problem that can be clearly defined, and as such it does not have a concrete, absolute answer. Does Rationality have a set of problems that can be clearly defined? If not, how do we work toward getting our problems clearly defined?
See also: Open problems at LW:Wiki
[1]: "Clearly defined" essentially means a formal, unambiguous definition. "Solving" such a problem would constitute a formal proof.
Here's an open problem that's been on my mind this past week:
Take some controversial question on which there are a small number of popular opinions. Draw a line going from 0 on the left to 1 on the right. Divide that line into segments for each opinion that holds > 1% of opinion-space.
Now stratify the population by IQ into 10-point intervals and redo the process, drawing a new 0-to-1 line for each IQ range and dividing it into segments. Then stack the resulting lines vertically and connect the segments belonging to the same opinion across IQ groups.
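The per-band step of this procedure can be sketched in a few lines of Python. This is a minimal illustration with hypothetical survey data (the opinion labels, IQ bands, and counts are all made up for the example); it computes each band's segments of the 0-to-1 line, which you would then stack and connect by label to draw the picture.

```python
from collections import Counter

def opinion_segments(responses, min_share=0.01):
    """Return (label, share) pairs for opinions holding more than
    min_share of opinion-space, largest first. Shares sum to <= 1;
    the remainder is the filtered-out tail of rare opinions."""
    counts = Counter(responses)
    total = sum(counts.values())
    return [(label, n / total)
            for label, n in counts.most_common()
            if n / total > min_share]

# Hypothetical survey: opinion labels grouped into 10-point IQ bands.
survey = {
    "90-99":   ["A"] * 60 + ["B"] * 30 + ["C"] * 10,
    "100-109": ["A"] * 40 + ["B"] * 45 + ["C"] * 15,
    "110-119": ["A"] * 25 + ["B"] * 55 + ["C"] * 20,
}

for band, responses in survey.items():
    # Each band's segments partition its own 0-to-1 line; stacking the
    # bands vertically and joining like-labeled segments gives the figure.
    print(band, opinion_segments(responses))
```

With real data you would replace the literal lists with survey responses bucketed by IQ, and hand the shares to any stacked-area plotting routine to draw the connected bands.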
What does the resulting picture look like? Questions include:
Other variables could be used on the vertical axis. In general, continuous variables (IQ, educational level, year of survey, income) are more interesting to me than categorical variables (race, sex). I'm trying to get at how systematic misunderstandings are, how reliably increasing understanding increases accuracy, and whether increasing understanding of a topic increases or decreases the chances of agreement on it.
I've recently had the impression that certain wrong opinions reliably attract people at a certain level of understanding of a problem. In some domains, increasing someone's intelligence or awareness seems to decrease their chances of correct action. This is probably because people have evolved correct behaviors, and figure out incorrect ones when they're smart enough to notice what they're doing but not smart enough to figure out why. But it's then interesting that their errors would be correlated rather than random.
If there were a common pattern, you could sample the opinions in a population, or over time, and estimate how much longer it would take, or how much knowledge would be needed, to arrive at a correct opinion.