lukeprog comments on The Inefficiency of Theoretical Discovery - Less Wrong Discussion
This is an interesting hypothesis, and one I hadn't thought of. But it would be hard to measure!
Out of curiosity, what gives you that impression? I tend to cite it because it is (along with the Löbian cooperation stuff) among the most important results to come out of MIRI's first couple of workshops, not because I can already tell whether it's an important breakthrough for mathematical logic in general.
As for the purpose and relevance of the Löbian obstacle work, it seems like there might still be a failure of communication there. Since you and Eliezer and I have discussed this at length and there still seems to be an unbridged gap, I'm not sure what I can say to bridge it. Maybe this quote from Paul?
In the OP I actually gave program equilibrium as an example of new theoretical progress that opens up new lines of inquiry, e.g. the modal agents work (though of course there are other pieces contributing to modal agents, too). So yeah, I don't think the modal agents work is an example of inefficiency.
The examples I gave in the OP of apparent inefficiency in decision theory research were philosophy's failure to formulate a reliabilist metatheory of instrumental rationality until 2013 (even though reliabilist theories of epistemic rationality have been popular since the late 1960s), and the apparently slow uptake of causal Bayes nets in the causal decision theory world.
In this very post, you placed it in a list next to normative uncertainty and the intelligence explosion. The implication seemed obvious to me, but perhaps it was unintended.
I seem to remember other comments/posts where similar sentiments were either expressed or implied, although a quick search doesn't turn them up, so perhaps I was wrong.
Yeah, unintended, but I can see why one might infer that.
Does my "philosophical edge" comment imply importance to you? I was merely trying to say that it's philosophical even though I'm thinking of it in terms of AI, and, unlike with your first example, it's not obvious to me why one would read the comment as assigning particular importance to the result.
I think the comment I quoted is not, by itself, objectionable to me. If it's actually the only example I can come up with, then it would be unfair to criticize it, so I will update the parent comment to remove it.