
IlyaShpitser comments on AI cooperation is already studied in academia as "program equilibrium" - Less Wrong Discussion

35 Post author: cousin_it 30 July 2012 03:22PM




Comment author: IlyaShpitser 30 July 2012 08:21:58PM 1 point

One simple heuristic formula I've heard for approximate problem difficulty is this:

D = log(T)*N

D is the difficulty, T is the length of time the problem has stayed open after being posed, and N is the average number of people working on it.

Rationale:

Since we expect P != NP, finding proofs is an exponential search problem. That means that in time T we can only cover problems whose shortest proofs have length about log(T). Assuming linear scaling from adding more people, we get a factor of N (in reality, scaling is sublinear, but this is a heuristic).

The point is that just because a field has open problems doesn't mean those problems are necessarily very hard. It could be that they haven't been open very long, or that not enough people care.
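The heuristic above is easy to compute directly. Here is a minimal sketch; the function name and the example numbers are made up purely for illustration:

```python
import math

def difficulty(T, N):
    """Heuristic difficulty estimate: D = log(T) * N.

    T: time the problem has stayed open after being posed
       (arbitrary units, e.g. years)
    N: average number of people working on it
    """
    return math.log(T) * N

# Hypothetical comparison: a long-open, well-staffed problem scores
# much higher than a recently posed, neglected one.
old_popular = difficulty(100, 50)
new_obscure = difficulty(2, 3)
print(old_popular, new_obscure)
```

Note how the logarithm compresses T: doubling the time a problem has stayed open adds only a constant to the difficulty estimate, while doubling the number of people working on it doubles the estimate.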

Comment author: cousin_it 30 July 2012 09:29:26PM 2 points

If D is the proof length, surely your reasoning should lead to something like log(T*N) rather than log(T)*N? It seems to me that N people working for T days is T*N person-days, so the two variables should be mostly exchangeable; it's weird to see only one of them under the logarithm.
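The difference between the two formulas is easy to see numerically. In this illustrative sketch (numbers made up), total effort is held fixed at T*N = 1000 person-days but split two different ways: log(T*N) is unchanged by the split, while log(T)*N swings by a factor of more than 30:

```python
import math

# Fix total effort at T*N = 1000 person-days, split two ways.
for T, N in [(1000, 1), (10, 100)]:
    print(f"T={T:4d}, N={N:3d}: "
          f"log(T)*N = {math.log(T) * N:7.1f}, "
          f"log(T*N) = {math.log(T * N):.1f}")
```

Under log(T)*N, one person working 1000 days gives about 6.9 while 100 people working 10 days gives about 230.3; under log(T*N), both splits give the same value, which is the exchangeability cousin_it is pointing at.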

Comment author: [deleted] 31 July 2012 12:03:03AM 1 point

Since we expect P != NP, finding proofs is an exponential search problem.

P != NP does not imply that one can't do better than an exponential search for finding proofs. For instance, an O(n^log(n)) algorithm for finding proofs would not contradict P != NP.