benelliott comments on How To Lose 100 Karma In 6 Hours -- What Just Happened - Less Wrong

-31 Post author: waitingforgodel 10 December 2010 08:27AM


Comment author: Leonhart 10 December 2010 01:56:47PM * 34 points

I'm curious.

I am in the following epistemic situation: a) I missed, and thus don't know, the BANNED TOPIC; b) I do, however, understand enough of the context to grasp why it was banned (basing this confidence on the upvotes to my old comment here).

Out of the members here who share roughly this position, am I the only one who - having strong evidence that EY is a better decision theorist than me, and understanding enough of previous LW discussions to realise that yes, information can hurt you in certain circumstances - is PLEASED that the topic was censored?

I mean, seriously. I never want to know what it was and I significantly resent the OP for continuing to stir the shit and (no matter how marginally) increasing the likelihood of the information being reposted and me accidentally seeing it.

Of course, maybe I'm miscalibrated. It would be interesting to know how many people are playing along to keep the peace, while actually laughing at the whole thing because of course no mere argument could possibly hurt them in their invincible mind fortresses.

(David Gerard, I'd be grateful if you could let me know if the above trips any cultishness flags.)

Comment author: benelliott 11 December 2010 06:07:30PM 2 points

I've never seen the basilisk (and I have just about resisted the very powerful urge to seek it out), but if one of us came up with a dangerous idea, is it not likely that an AI would do the same? Taking into account the vastly greater capacity of an AI to cause harm if 'infected', might we not gain more from looking at the problem now, in case we can find a resolution (perhaps a better decision theory) and use it to avert a genuinely catastrophic outcome? Even if our hopes of solving the problem are not high, the probabilities and utilities may still advise it.

Of course, since I haven't seen it, I might be totally misunderstanding the situation, or maybe there is an excellent reason why the above is wrong that I can't understand without exposing myself to the basilisk. Even if this isn't the case, it might still be best for a few people who have already seen it to work on the problem, rather than informing someone like me who probably wouldn't be much help anyway.

If it's not too much trouble, could you at least sate my burning curiosity by telling me which of the three options above, if any, is correct?

Comment author: Eliezer_Yudkowsky 11 December 2010 08:24:58PM 6 points

You're totally misunderstanding the situation.

Comment author: benelliott 11 December 2010 09:57:09PM 6 points

Thanks.