HalFinney comments on Agree, Retort, or Ignore? A Post From the Future - Less Wrong

Post author: Wei_Dai 24 November 2009 10:29PM


Comment author: HalFinney 24 November 2009 10:54:43PM 9 points

I agree about the issue of unresolved arguments. Was agreement reached and that's why the debate stopped? No way to tell.

In particular, the epic AI-foom debate between Robin and Eliezer on OB, over whether AI or brain simulations were more likely to dominate the next century, was never clearly resolved with updated probability estimates from the two participants. In fact, probability estimates were rare in general. Perhaps a step forward would be for disputants to publicize their probability estimates and update them as the conversation proceeds.

BTW sorry to see that linkrot continues to be a problem in the future.

Comment author: Wei_Dai 26 November 2009 11:28:47PM 6 points

I took the liberty of creating a wiki page about the AI-foom debate, with links to all of the posts collected in one place, in case anyone wants to refer to it in the future.

Comment author: Wei_Dai 26 November 2009 07:56:16AM *  1 point

Perhaps a step forward would be for disputants to publicize their probability estimates and update them as the conversation proceeds.

I find myself reluctant to support this idea. I think the main reason is that it seems very hard to translate my degrees of belief into probability numbers. So I'm afraid that I'll update my beliefs correctly in response to other people's arguments, but state the wrong numbers. Is this a skill that we can learn to perform better?

Right now I just try to indicate my degrees of belief using English words, like "I'm sure", "I think it's likely", "perhaps", etc., which has the disadvantage of not being very precise, but the advantage of requiring little mental effort (which I can redirect into, for example, thinking about whether an argument is correct or not).

ETA: It does seem that there are situations where the extra mental effort required to state probability estimates would be useful, like in the AI-Foom debate, where there is persistent disagreement after an extensive discussion. The disputants can perhaps use probability estimates to track down which individual beliefs (e.g., conditional probabilities) are causing their overall disagreement.
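The suggestion above — using component probability estimates to locate the source of an overall disagreement — can be sketched with a toy decomposition. The step names and all numbers below are purely hypothetical illustrations, not positions actually taken in the AI-foom debate:

```python
from math import prod

# Toy example: each disputant's overall probability of an outcome is
# decomposed into a chain of conditional probabilities. Comparing the
# chains term by term localizes the disagreement. All numbers are
# made up for illustration.
steps = ["AI built this century", "rapid self-improvement", "resulting dominance"]
disputant_a = [0.9, 0.8, 0.9]
disputant_b = [0.9, 0.1, 0.9]

# Overall estimates diverge sharply...
print(prod(disputant_a), prod(disputant_b))

# ...but the divergence traces to a single conditional belief:
for name, a, b in zip(steps, disputant_a, disputant_b):
    if abs(a - b) > 0.2:
        print("disagreement concentrated in:", name)
```

The point is that two people can share most of their component beliefs yet reach very different bottom lines, and only an explicit decomposition makes the pivotal term visible.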

Comment author: wedrifid 25 November 2009 08:29:39AM *  1 point

Perhaps a step forward would be for disputants to publicize their probability estimates and update them as the conversation proceeds.

Would that be desirable? I know, for example, that when reading Robin's posts on that topic I often updated away from Robin's position (weak arguments from a strong debater are evidence that there are no stronger arguments). Given this possibility, having public numbers diverge in such a way would be rather dramatic and decidedly favour dishonesty.

In general there are just far too many signalling reasons to avoid having 'probability estimates' public. Very few discussions even here are sufficiently rational as to make those numbers beneficial.

Comment author: matt 25 November 2009 07:33:03PM *  0 points

When your estimates are tracked (which was the purpose of predictionbook.com [disclaimer: financial interest]), it becomes much harder to signal with them without blowing your publicly visible calibration.
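One standard way a tracking service could score calibration is the Brier score: the mean squared error between stated probabilities and resolved outcomes (0 is perfect; always saying 0.5 scores 0.25). This is a minimal sketch of the idea, not PredictionBook's actual scoring code, and the sample forecasts are invented:

```python
def brier_score(predictions):
    """predictions: list of (stated_probability, outcome) pairs,
    where outcome is True if the event happened.
    Returns the mean squared error of the stated probabilities."""
    return sum((p - (1.0 if happened else 0.0)) ** 2
               for p, happened in predictions) / len(predictions)

# A forecaster who inflates confidence for signalling purposes pays
# for it once outcomes resolve (hypothetical records):
honest = [(0.7, True), (0.7, True), (0.7, False), (0.3, False)]
overconfident = [(0.99, True), (0.99, True), (0.99, False), (0.01, False)]

print(brier_score(honest))         # 0.19
print(brier_score(overconfident))  # 0.2451 — worse, despite more "right" calls
```

This is why public tracking cuts against the signalling worry: stated numbers that diverge from genuine beliefs show up as a worse score over time.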

Comment author: wedrifid 26 November 2009 02:47:29AM 0 points

It does. Of course, given that I was primed with the 'AI-foom' debate I found the thought of worrying what people will think of your calibration a little amusing. :)