
lukeprog comments on The Inefficiency of Theoretical Discovery - Less Wrong Discussion

19 points · Post author: lukeprog · 03 November 2013 09:26PM




Comment author: lukeprog 04 November 2013 09:40:50PM 0 points

many of the things that look like inefficiencies are actually trading off small local gains for large global gains

This is an interesting hypothesis, and one I wasn't thinking of. But hard to measure!

For instance, MIRI repeatedly brings up Paul's probabilistic metamathematics as an important piece of research progress produced by MIRI.

Out of curiosity, what gives you that impression? I tend to cite it because it is (along with the Lobian cooperation stuff) among the most important results to come out of MIRI's first couple workshops, not because I can already tell whether it's an important breakthrough in mathematical logic in general.

As for the purpose and relevance of the Lobian obstacle work, there may still be a failure of communication. Since you and Eliezer and I have discussed this at length and a gap still seems to remain, I'm not sure what more I can say to bridge it. Maybe this quote from Paul?

No one thinks that the world will be destroyed because people built AIs that couldn't handle the Lobian obstruction. That doesn't seem like a sensible position, and I think Eliezer explicitly disavows it in the writeup. The point is that we have some frameworks for reasoning about reasoning. Those formalisms don't capture reflective reasoning, i.e. they don't provide a formal account of how reflective reasoning could work in principle. The problem Eliezer points to is an obvious problem that any consistent framework for reflective reasoning must resolve.

Working on this problem directly may be less productive than just trying to understand how reflective reasoning works in general---indeed, folks around here definitely try to understand how reflective reasoning works much more broadly, rather than focusing on this problem. The point of this post is to state a precise problem which existing techniques cannot resolve, because that is a common technique for making progress.
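[For readers unfamiliar with the obstacle Paul is describing, it can be stated compactly via Löb's theorem, a standard result in mathematical logic. This summary is not part of the original thread:]

```latex
% Löb's theorem: for any theory T extending PA and any sentence P,
\text{if } T \vdash \big(\mathrm{Prov}_T(\ulcorner P \urcorner) \rightarrow P\big),
\text{ then } T \vdash P.
```

Consequently, a consistent theory T cannot prove its own soundness schema Prov_T(⌜P⌝) → P for all sentences P: doing so would, by Löb's theorem, make T prove every sentence. An agent reasoning in T that wants to trust a successor which also proves theorems in T needs exactly that schema, which is the obstacle.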

Another example would be decision theory of modal agents. I also won't take the time to treat this in detail, but will simply note that this work studies a form of decision theory that MIRI itself invented, and that no one else uses or studies.

In the OP I actually gave program equilibrium as an example of new theoretical progress that opens up new lines of inquiry, e.g. the modal agents work (though of course there are other pieces contributing to modal agents, too). So yeah, I don't think the modal agents work is an example of inefficiency.
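[As background on the "Lobian cooperation" result referenced above, here is a compressed sketch of the simplest provability-based agent, FairBot, from the robust-cooperation line of work (Barasz et al.). This summary is not part of the original thread:]

```latex
% FairBot cooperates iff PA proves its opponent cooperates with it:
FB(X) = C \iff PA \vdash \text{``}X(FB) = C\text{''}
% Against itself, PA proves (directly from FB's definition):
PA \vdash \big(\mathrm{Prov}_{PA}(\ulcorner FB(FB) = C \urcorner) \rightarrow FB(FB) = C\big)
% so by Löb's theorem:
PA \vdash FB(FB) = C
```

That is, FairBot provably cooperates with itself, despite neither copy being able to simply "run" the other; the modal-agents work generalizes this to agents defined by formulas of provability logic.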

The examples I gave in the OP of apparent inefficiency in decision theory research were philosophy's failure to formulate a reliabilist metatheory of instrumental rationality until 2013, even though reliabilist theories of epistemic rationality have been popular since the late 1960s, and the apparently slow uptake of causal Bayes nets in the causal decision theory world.

Comment author: jsteinhardt 05 November 2013 06:00:13AM (edited) 1 point

Out of curiosity, what gives you that impression? I tend to cite it because it is (along with the Lobian cooperation stuff) among the most important results to come out of MIRI's first couple workshops, not because I can already tell whether it's an important breakthrough in mathematical logic in general.

In this very post you placed it in a list next to normative uncertainty and the intelligence explosion. The implication seemed obvious to me but perhaps it was unintended.

I seem to remember other comments / posts where similar sentiments were either expressed or implied, although a quick search doesn't turn them up, so perhaps I was wrong.

Comment author: lukeprog 05 November 2013 06:57:27AM 1 point

The implication seemed obvious to me but perhaps it was unintended.

Yeah, unintended, but I can see why one might infer that.

Does my "philosophical edge" comment imply importance to you? I was merely trying to say that it's philosophical even though I'm thinking of it in terms of AI, and, unlike your first example, it's not obvious to me why one would read the comment as assigning particular importance to the result.

Comment author: jsteinhardt 05 November 2013 05:02:56PM 0 points

The comment I quoted is not, by itself, objectionable to me. If it's actually the only example I can come up with, then it would be unfair to criticize it, so I will update the parent comment to remove it.