JoshuaZ comments on Let's make a deal - Less Wrong

-18 Post author: Mitchell_Porter 23 September 2010 12:59AM




Comment author: JoshuaZ 24 September 2010 01:14:42PM 0 points [-]

I don't know if you mean it won't work to separate P from NP (Mulmuley's goal) or won't work on the subproblems of FAI (which is what I was proposing).

Won't work to separate P from NP.

Quantization via brane -> symplectic quotients -> partial stability.

Thanks. Reading those now.

Comment author: Mitchell_Porter 27 September 2010 02:26:58AM 1 point [-]

I should expand on this... Geometric complexity theory is about posing and solving complexity-class problems in a geometric context. For example: computing the permanent of a matrix is in #P, while computing the determinant is in P. Permanents and determinants can be thought of as algebraic subvarieties, and showing that a certain mapping cannot turn determinants into permanents (conjecture 3.2 in the third paper) would show that P is not #P.

The idea is to use certain new constructions from mathematical physics (e.g. first paper) to understand such mappings. The space of orbits under the mapping is a quotient of the target space, and there is a history of using techniques from physics to understand these particular quotient spaces.

The big development I see brewing is the relation between the "geometric Langlands" program and the quantization of the 5-brane in M-theory, for which a whole new approach to quantization has to be invented. So I'd like to see what happens if you take those new ideas and push them along the chain from first paper to third paper. (But that first paper is just supposed to be representative of the ongoing work, I'm not saying it specifically contains the technically appropriate concepts.)
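The permanent/determinant contrast mentioned above can be made concrete: the two polynomials differ only in the signs of their terms, yet the determinant has a polynomial-time algorithm (Gaussian elimination), while no polynomial-time algorithm for the permanent is known (computing it is #P-hard). A minimal Python sketch, not from the original comments:

```python
from itertools import permutations
from math import prod

def permanent(M):
    """Sum over all permutations of products of entries, with no sign.
    This naive expansion takes n! steps; the permanent is #P-hard."""
    n = len(M)
    return sum(prod(M[i][s[i]] for i in range(n)) for s in permutations(range(n)))

def determinant(M):
    """Gaussian elimination with partial pivoting: O(n^3), so det is in P."""
    A = [list(map(float, row)) for row in M]
    n = len(A)
    det = 1.0
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        if abs(A[pivot][col]) < 1e-12:
            return 0.0
        if pivot != col:
            A[col], A[pivot] = A[pivot], A[col]
            det = -det  # a row swap flips the sign
        det *= A[col][col]
        for r in range(col + 1, n):
            factor = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= factor * A[col][c]
    return det

M = [[1, 2], [3, 4]]
print(permanent(M))    # 1*4 + 2*3 = 10
print(determinant(M))  # 1*4 - 2*3 = -2
```

The only difference between the two functions' defining formulas is the sign of each permutation's term, which is what makes the complexity gap between them so striking, and why GCT tries to capture it geometrically.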

Meanwhile, for P versus NP, there is a particular barrier to proof which Mulmuley and collaborators hope to get around because of the specialness of the varieties. Right now I'm trying to understand what it is about the algebraic-geometric facts which gives them the right logical and combinatorial properties to evade this "naturalization barrier". Then I want to look at models of CEV (e.g. for a population of DFT agents) from this perspective, to see if it will help with the difficult problems there (the interplay between self-enhancement, utility function discovery, and utility function renormalization).

Comment author: JoshuaZ 27 September 2010 01:23:22PM 1 point [-]

Another thing: although they don't discuss it, it looks like this method might also get around the relativization barrier, since the associated varieties are going to look very different when an oracle is attached.

Comment author: JoshuaZ 27 September 2010 03:11:17AM 1 point [-]

Yeah, that seems to fit with the impression I got from the papers. I'm not convinced that this can overcome the natural proofs barrier, but it looks more promising than other attacks I've seen. (Unfortunately this is potentially far enough from my own area of expertise that evaluating it in any great detail is probably going to be very difficult.)

Comment author: Mitchell_Porter 04 October 2010 05:54:12AM 0 points [-]

I took it to MathOverflow after Witten's latest paper. It would be crazy if string theory were the key to proving that P is not NP!