Eliezer_Yudkowsky comments on Friendly AI and the limits of computational epistemology - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Very low, of course. (Then again, relative to the perspective of nonscientists, there turned out to be a single procedure that could be used to solve all empirical problems.) But in general, problems always look much more complicated than solutions do; the presence of a host of confusions does not indicate that the set of deep truths underlying all the solutions is noncompact.
Do you think it's reasonable to estimate the amount of philosophical confusion we will have at some given time in the future by looking at the amount of philosophical confusion we currently face, and comparing that to the rate at which we are clearing it up minus the rate at which new confusions are popping up? If so, how much of your relative optimism is accounted for by your work on meta-ethics? (Recall that we have a disagreement over how much progress that work represents.) Do you think my pessimism would be reasonable if we assume, for the sake of argument, that that work does not actually represent much progress?