Wei_Dai comments on SUGGEST and VOTE: Posts We Want to Read on Less Wrong - Less Wrong

15 Post author: lukeprog 07 February 2011 02:51AM


Comment author: Wei_Dai 09 February 2011 01:11:28AM *  0 points [-]

Thanks, that's actually much clearer to me.

You know why you asked the question, i.e., what phenomenon you had in mind, and that knowledge "unasks" the question.

But can't that knowledge be expressed as a truth in some language, even if not the one that I used when I first asked the question? To put it another way, if I'm to be given confusion extinguishing answers, I still want them to be true answers, because surely there are false answers that will also extinguish my confusion (since I'm human and flawed).

I'm worried about prematurely identifying the thing we want with heuristics for obtaining that thing. I think we are tempted to do this when we want to clearly express what we want, and we don't understand it, but we do understand the heuristics.

Do you understand my worry, and if so, do you think it applies here?

Comment author: SilasBarta 09 February 2011 04:13:06PM 0 points [-]

I think I understand your worry: you think there's a truth thing separate from the heuristic I gave, and that the latter is just a loose approximation that we should not use as a replacement for the former.

I differ in that I think it's the reverse: truth always "cashes out" as a useful self-to-reality model, and this becomes clearer as your model gets more accurate. Rather than just a heuristic, it is ultimately what you want when you say you are seeking the truth. And any judgment that you have reached the truth will fall back on the question of whether you have a useful self-to-reality model.

To put it another way, what if the model you were given performs perfectly? Would you have any worry that, "okay, sure, this is able to accurately capture the dynamics of all phenomena I am capable of observing ... but what if it's just tricking me? This might not all be really true." I would say at that point, you have your priorities reversed: if something fails at being "truth" but can perform that well, this "non-truth" is no longer something you should care about.