Suppose someone offered you a bet on P = NP. How should you go about deciding whether or not to take it?
Does it make sense to assign probabilities to unproved mathematical conjectures? If so, what is the probability of P = NP? How should we compute this probability, given what we know so far about the problem? If the answer is no, what is a rational way to deal with mathematical uncertainty?
It only took 7 years to make substantial progress on this problem: "Logical Induction" by Garrabrant et al.
Well, I'm pretty sure that the smart money is all on one side of the question because of certain heuristic evidence that becomes very clear when you've worked with these complexity classes for a long while.
It's the same reason the smart money was on Fermat's Last Theorem being true (prior to the proof being found); not only would it have been very unusual in mathematics for this simple Diophantine equation to have its first nontrivial solution only when the numbers became absurdly large, but it is equivalent to a beautiful conjecture in elliptic curves which seemed to admit of no counterexamples.
There's plenty of inductive evidence found and used in the actual doing of mathematics; it just doesn't make the textbooks (since you generally don't publish on a subject unless you've found a proof, at which point the inductive evidence is utterly redundant). Yet it guides the intuition of all mathematicians when they decide what to try proving next.
Compare Einstein's Arrogance.
Suppose you test Fermat's Last Theorem for n up to 10^10, and don't find a counterexample. How much evidence does that give you for FLT being true? In other words, how do you compute P(a counterexample exists with n<=10^10 | FLT is false), since that's what's needed to do a Bayesian update with this inductive evidence? (Assume this is before the proof was found.)
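For concreteness, here's a minimal sketch of the update being asked about. The likelihood q is exactly the quantity nobody knows how to compute; the value used below is a pure placeholder, not a derived figure.

```python
# A minimal sketch of the Bayesian update on FLT, assuming a placeholder
# value for the likelihood q = P(counterexample with n <= 10^10 | FLT false).

def posterior_flt(prior_true, q):
    """Posterior P(FLT true | no counterexample found for n <= 10^10).

    prior_true: prior probability that FLT is true.
    q: assumed P(a counterexample exists with n <= 10^10 | FLT false).
    """
    p_evidence_if_true = 1.0        # no counterexample can exist if FLT holds
    p_evidence_if_false = 1.0 - q   # FLT false, but the counterexample lies beyond 10^10
    numerator = p_evidence_if_true * prior_true
    denominator = numerator + p_evidence_if_false * (1.0 - prior_true)
    return numerator / denominator

# Example: a prior of 0.5 and a placeholder q of 0.9 gives a posterior of ~0.91.
print(posterior_flt(0.5, 0.9))
```

The whole question, of course, is where q comes from; the sketch only shows what you would do with it if you had it.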
I don't dispute that mathematicians do seem to reason in ways that are similar to using probabilities, but I'd like to know where these "probabilities" are coming from and whether the reasoning process really is isomorphic to probability theory. What you call "heuristic" and "intuition" are the results of computations being done by the brains of mathematicians, and it would be nice to know what the algorithms are (or should be), but we don't have them even in an idealized form.
Is there a best language in which to express complexity for use in the context of Occam's razor?
If there is a best language in which to express complexity for use in the context of Occam's razor, what is that language?
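One known partial answer: by the invariance theorem for Kolmogorov complexity, any two universal description languages A and B assign complexities that differ by at most a constant depending only on the pair of languages, not on what is being described:

    |K_A(x) - K_B(x)| <= c_{A,B}  for all strings x.

The catch is that c_{A,B} can be enormous, so for the short hypotheses people actually compare, the choice of language still bites.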
Here are some more problems that have come up on LW:
What does it mean to deal rationally with moral uncertainty? If Nick Bostrom and Toby Ord's solution is right, how do we apply it in practice?
ETA: this isn't a "clearly defined" question in the sense you mentioned, but I'll leave it up anyway; apologies
Here's an open problem that's been on my mind this past week:
Take some controversial question on which there are a small number of popular opinions. Draw a line going from 0 on the left to 1 on the right. Divide that line into segments for each opinion that holds > 1% of opinion-space.
Now stratify the population by IQ into 10-point intervals. Redo the process, drawing a new line from 0 to 1 for each IQ range and dividing it into segments. Then stack your 0-1 line segments up vertically. Connect the sections for the same opinions in each IQ group.
Wha...
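Here is a rough sketch of the construction described above, with entirely invented shares; the lines connecting matching segments across bands are left out for brevity, but stacked horizontal bars capture the rest.

```python
# Sketch of the opinion-space visualization: one 0-to-1 bar per IQ band,
# divided into segments by opinion share. All numbers below are made up.
import matplotlib.pyplot as plt

iq_bands = ["85-94", "95-104", "105-114", "115-124", "125-134"]
opinions = ["A", "B", "C"]
# shares[i][j]: share of opinion j within IQ band i (each row sums to 1)
shares = [
    [0.50, 0.30, 0.20],
    [0.45, 0.35, 0.20],
    [0.35, 0.40, 0.25],
    [0.25, 0.45, 0.30],
    [0.15, 0.50, 0.35],
]

fig, ax = plt.subplots()
left = [0.0] * len(iq_bands)
for j, opinion in enumerate(opinions):
    # one bar-segment per IQ band for this opinion, stacked after the previous ones
    widths = [row[j] for row in shares]
    ax.barh(range(len(iq_bands)), widths, left=left, label=opinion)
    left = [l + w for l, w in zip(left, widths)]
ax.set_yticks(range(len(iq_bands)))
ax.set_yticklabels(iq_bands)
ax.set_xlabel("share of opinion-space (0 to 1)")
ax.set_ylabel("IQ band")
ax.legend(title="opinion")
plt.show()
```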
Another problem, although to what degree it is currently both "open" and relevant is debatable: finding new loopholes in Arrow's Theorem.
Can you even have a clearly defined problem in any field other than Mathematics, or one that doesn't reduce to a mathematical problem regardless of the field where it originated?
I've been lurking here for a while now and thought I had something to add for the first time, so Hi all, thanks for all the great content and concepts; on to the good stuff:
I think a good open problem for the list would be: a formal (or at least a good, solid) definition of rationality. I know of things like BDI architecture and Pareto optimality, but how do these apply to a rational human being? For that matter, how do you reason logically/formally about a human being? What would be a good abstraction/structure, and are there any guidelines?
well, just my 2 formal cents.
compare to: http://lesswrong.com/lw/x/define_rationality/
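For what it's worth, one standard formal starting point (certainly not the only candidate) is the expected-utility maximizer: given beliefs P and a utility function U over outcomes, a rational choice is

    a* = argmax_{a in A} sum_s P(s | a) * U(s).

How well that abstraction fits an actual human being is exactly the open part of the question.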
Well, I can give some classes of problems. For instance, many of the biases that we know about, we don't really know good ways for humans to reliably correct for. So right there is a whole bunch of open problems. (I know of some specific ones with known debiasing techniques, but many are "okay, great, I know I make this error... but other than occasionally being lucky enough to directly catch myself in the act of doing so, it's not really obvious how to correct for these")
Another, I guess vaguer one would be "general solution that allows people to solve their akrasia problems"
How do we handle the existence of knowledge which is reliable but cannot be explained? As an example, consider the human ability to recognize faces (or places, pieces of music, etc). We have nearly total confidence in our ability to recognize people by their faces (given enough time, good lighting, etc). However, we cannot articulate the process by which we perform face recognition.
Imagine you met a blind alien, and for whatever reason needed to convince it of your ability to recognize people by face. Since you cannot provide a reasonable description of y...
This one's well-known in certain quarters (so it's not really open), but it may provide training for those unfamiliar with it.
Suppose that you observe two random samples* from the uniform distribution centered at unknown location c with width 1. Label the larger sample x_max and the smaller x_min. The random interval (x_min, x_max) is a 50% confidence interval for c because it contains c with 50% probability.
* changed from "deviates" per comment below
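A quick simulation of the coverage claim, plus the conditional twist that makes the example instructive: whenever the two samples are more than 1/2 apart, the interval is guaranteed to contain c (two points in an interval of width 1 can only be that far apart by straddling the center).

```python
# Simulate coverage of the interval (x_min, x_max) for c, both
# unconditionally and conditional on the interval being wide.
import random

c = 3.7          # arbitrary; coverage does not depend on c
n = 100_000
covered = wide = wide_covered = 0
for _ in range(n):
    a = random.uniform(c - 0.5, c + 0.5)
    b = random.uniform(c - 0.5, c + 0.5)
    lo, hi = min(a, b), max(a, b)
    hit = lo <= c <= hi
    covered += hit
    if hi - lo > 0.5:
        wide += 1
        wide_covered += hit

print(covered / n)          # ~0.5: it really is a 50% confidence interval
print(wide_covered / wide)  # 1.0: conditional on a wide interval, coverage is certain
```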
This does somewhat dodge the question, but it does make a difference that an infinite set of counterexamples can be associated with each counterexample. That is, if (a,b,c,n) is a solution to the Fermat equation, then (ka,kb,kc,n) is as well, for any positive integer k.
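The scaling fact is just homogeneity of the equation:

    (ka)^n + (kb)^n = k^n (a^n + b^n) = k^n c^n = (kc)^n,

so a single counterexample would generate infinitely many.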
This is not a game question, but it may be an interesting question regarding decision making for humans:
What is the total Shannon entropy of the variables controlling whether or not a human will do what it consciously believes will lead to the most desirable outcome?
If all humans currently alive collectively represent every possible variable combination in this regard, the maximum value for the answer is 32.7 bits[1]. That is, 33 on/off switches completely decide whether or not you will put off doing your homework[2]. Is the correct value higher or lower...
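The 32.7-bit cap is presumably just the log of the world population: with roughly 7 billion people alive, at most that many distinct switch-settings can actually be realized, and

    log2(7 x 10^9) ~ 32.7 bits,

so 33 binary switches would suffice to distinguish every living person's configuration.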
How do you build a smart, synthetic goal-seeking agent? I believe there are some associated problems as well.
Well, this is unspeakably late, but better late than never.
I was beginning to think in all seriousness that none of you would ever begin asking what questions you should be asking. It's nice to be proven wrong.
Open problems are clearly defined problems[1] that have not been solved. In older fields, such as Mathematics, the list is rather intimidating. Rationality, on the other hand, seems to have no such list.
While we have all of us here together to crunch on problems, let's aim higher than thinking up solutions and then hunting for problems to match them. What questions remain unsolved? Is it reasonable to assume those questions have concrete, absolute answers?
The catch is that these problems cannot be inherently fuzzy problems. "How do I become less wrong?" is not a problem that can be clearly defined. As such, it does not have a concrete, absolute answer. Does Rationality have a set of problems that can be clearly defined? If not, how do we work toward getting our problems clearly defined?
See also: Open problems at LW:Wiki
1: "Clearly defined" essentially means a formal, unambiguous definition. "Solving" such a problem would constitute a formal proof.