nshepperd comments on The genie knows, but doesn't care - Less Wrong

54 Post author: RobbBB 06 September 2013 06:42AM




Comment author: linkhyrule5 10 September 2013 11:54:46PM 2 points

I thought you wanted to persuade others.

Yes, but I don't see why this is relevant

So what do you think even happened, anyway, if you think the obvious explanation is impossible?

Ah, sorry. This brand of impossible.

Comment author: private_messaging 11 September 2013 09:02:26AM *  3 points

Yes, but I don't see why this is relevant

Originally, you were hypothesising that the problem with persuading others would be the possibility that Yudkowsky lied about the AI-box experiments. I pointed out the possibility that this experiment is far less profound than you think it is. (Though frankly I do not know why you consider it so profound.)

Ah, sorry. This brand of impossible.

Whatever the brand, any "impossibilities" that happen should lower your confidence in the reasoning that deemed them "impossibilities" in the first place. I don't think IQ is so strongly protective against deception, for example, and I don't think you can assess someone from how their posts look to you reliably enough to overcome Gaussian priors very far from the mean.

edit: an example. I would deem it quite unlikely that Yudkowsky could score highly on a programming contest with competent participants, or on any other conventional, validated, reliable metric of technical expertise and ability, under good contest rules (i.e. ones excluding external assistance). So if he did something like that, I'd be quite surprised, and I would lower my confidence in whatever models deemed it impossible; good old Bayes. Since I'm far more confident in the validity of those conventional metrics (and in the lack of alternative ways to pass them, such as persuasion) than in my own assessment, my assessment is what would change the most. With some unconventional game, by contrast, even if I thought the game was difficult, I'd be much less confident in the reasoning "it looks hard, so it must be hard" than in the low prior of exceptional performance.
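The update described above can be made concrete with a toy Bayes calculation. The numbers below are hypothetical, not from the comment: a strong prior that one's model is correct, where the model assigns the observed "impossible" event a ~1% chance while "the model is wrong" predicts it at ~50%.

```python
def posterior(prior_model_ok, p_event_if_ok, p_event_if_wrong):
    """P(model correct | event observed), by Bayes' rule over two hypotheses."""
    joint_ok = prior_model_ok * p_event_if_ok
    joint_wrong = (1 - prior_model_ok) * p_event_if_wrong
    return joint_ok / (joint_ok + joint_wrong)

# Observing the "impossible" event drags belief in the model from 0.95
# down to roughly 0.275 -- the weakest link (the model) moves the most.
p = posterior(prior_model_ok=0.95, p_event_if_ok=0.01, p_event_if_wrong=0.50)
print(round(p, 3))  # -> 0.275
```

The less confident you were in the model relative to the prior against exceptional performance, the larger the swing, which is the comment's point about trusting validated metrics more than surface impressions.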

Comment author: nshepperd 12 September 2013 12:28:16PM *  1 point

Whatever the brand, any "impossibilities" that happen should lower your confidence in the reasoning that deemed them "impossibilities" in the first place. I don't think IQ is so strongly protective against deception, for example, and I don't think you can assess someone from how their posts look to you reliably enough to overcome Gaussian priors very far from the mean.

Further, in this case the whole purpose of the experiment was to demonstrate that an AI could "take over a gatekeeper's mind through a text channel" (something previously deemed "impossible"). As far as that goes, it was, in my view, successful.

Comment author: Peterdjones 12 September 2013 12:48:15PM 0 points

something previously deemed "impossible"

It's clearly possible for some values of "gatekeeper", since some people fall for 419 scams. The test is a bit meaningless without information about the gatekeepers.