Will_Newsome comments on Strong intuitions. Weak arguments. What to do? - Less Wrong

Post author: Wei_Dai 10 May 2012 07:27PM




Comment author: Will_Newsome 11 May 2012 02:04:59AM 15 points

(Tangent: In some cases it's possible to find a third party who understands one participant's intuitions and is willing to explain them to participants with opposing intuitions, in language that everyone, including bystanders, can understand. E.g., I think that if I had been involved as a translator in the Muehlhauser-Goertzel debate, at least some of Ben's intuitions would have been made clearer to Luke. Because Luke didn't quite understand Ben's position, some LW bystanders also came away with a mistaken impression of it: e.g., a highly upvoted comment suggested that Ben thought arbitrary AGIs would end up human-friendly, which is not Ben's position. A prerequisite for figuring out what to do in the case of disagreeing intuitions is figuring out what the participants' intuitions actually are, and I don't think people are able to do this reliably. (I also think this is a weakness of Eliezer's style of rationality in particular, so LessWrong contributors especially might want to be wary of fighting straw men. I don't mean to attack LessWrong or anyone with this comment, just to share my impression of one of LessWrong's (IMO more glaring and important) weaknesses.))

Comment author: fiddlemath 12 May 2012 03:01:37AM 1 point

This smells true to me, but I don't have any examples at hand. Do you?

This would be especially useful here: specific examples would make it easier to think about strategies for avoiding this, and to see whether we're doing something systematically wrong.

Comment author: Rain 18 May 2012 04:43:34PM 0 points

One I recall is when Eliezer was talking with the 48 Laws of Power guy on Bloggingheads.tv. Eliezer kept using the same analogy over and over, even though the other guy clearly didn't get it. Eliezer has said he's particularly bad at coming up with good analogies on the spot.

Personally, I enjoy finding the right path to bridge inferential gaps, and I'm quite good at it in person. I've thought that a 'translation service' like the one Will recommends would be highly beneficial in such debates, but only with very skilled translators.