Vladimir_Nesov comments on How To Lose 100 Karma In 6 Hours -- What Just Happened - Less Wrong

-31 Post author: waitingforgodel 10 December 2010 08:27AM


Comments (214)


Comment author: Leonhart 10 December 2010 01:56:47PM *  34 points [-]

I'm curious.

I am in the following epistemic situation: a) I missed, and thus don't know, BANNED TOPIC; b) I do, however, understand enough of the context to grasp why it was banned (basing this confidence on the upvotes to my old comment here).

Out of the members here who share roughly this position, am I the only one who, having strong evidence that EY is a better decision theorist than me, and understanding enough of previous LW discussions to realise that yes, information can hurt you in certain circumstances, is PLEASED that the topic was censored?

I mean, seriously. I never want to know what it was and I significantly resent the OP for continuing to stir the shit and (no matter how marginally) increasing the likelihood of the information being reposted and me accidentally seeing it.

Of course, maybe I'm miscalibrated. It would be interesting to know how many people are playing along to keep the peace, while actually laughing at the whole thing because of course no mere argument could possibly hurt them in their invincible mind fortresses.

(David Gerard, I'd be grateful if you could let me know if the above trips any cultishness flags.)

Comment author: Vladimir_Nesov 10 December 2010 03:21:31PM *  5 points [-]

It would be interesting to know how many people are playing along to keep the peace, while actually laughing at the whole thing because of course no mere argument could possibly hurt them in their invincible mind fortresses

I'm certain that the forbidden topic couldn't possibly hurt me (probability of that is zilch). Still, I agree that from what we know, considering it should be discouraged, based on an expected utility argument (it either changes nothing or hurts tremendously with tiny probability, but can't correspondingly help tremendously because human value is a narrow target). Don't confuse these two arguments.

(I think this is my best summary of the shape of the argument so far.)

Comment author: Psy-Kosh 10 December 2010 05:01:44PM *  14 points [-]

(EDIT2: Looking at the discussion here, I am now reminded that it is not just potentially toxic due to decision theoretic oddities, but actually already known to be severely psychologically toxic to at least some people. This, of course, changes things significantly, and I am retracting my "being bugged" by the removal.)

The thing that's been bugging me about this whole issue is this: even given that a certain piece of information MAY (with really tiny probability) be highly (for lack of a better word) toxic... should we as humans really be in the habit of "this seems like a dangerous idea, don't think about it"?

I can't help but think this must violate something analogous (though not identical) to an ethical injunction. That is, the chances of a human encountering an inherently toxic idea are so small compared to the cost of smothering one's own curiosity, or of allowing censorship not due to trollishness or even the revelation of technical details that could be used to do something really dangerous, but simply because an idea is judged dangerous to even think about...

I get why this was perhaps a very particular special circumstance, but am still of several minds about this one. "Don't think about the deliciously forbidden dangerous idea, just don't", even if it perhaps actually is indicated in certain very unusual special cases, seems like the sort of thing that one would, as a human, want injunctions against.

Again, I'm of several minds on this however.

(EDIT: Just to clarify, that does not mean that I in any way approve of "existential threat blackmail" or that I'm even of two minds about that. That's just epically stupid.)

Comment author: David_Gerard 11 December 2010 03:19:11PM 1 point [-]

(EDIT2: Looking at the discussion here, I am now reminded that it is not just potentially toxic due to decision theoretic oddities, but actually already known to be severely psychologically toxic to at least some people. This, of course, changes things significantly, and I am retracting my "being bugged" by the removal.)

Yeah, that was the reason that convinced me its removal from here was a good enough idea to bother enacting. I wouldn't try removing it from the net, but due warning is appropriate. Such things attract curious monkeys to test the wet paint. But! I still haven't seen 2 Girls 1 Cup and have no plans to! So it's not assured.

Comment author: Strange7 06 February 2012 06:19:34AM 0 points [-]

I've seen it. It's not really as interesting as the hype would suggest.

Comment deleted 11 December 2010 03:03:21AM *  [-]
Comment author: Broggly 14 December 2010 07:42:12PM 2 points [-]

Really? That seems odd. It would be pretty silly for it to affect those who don't know about it. That would just be pointless.

Comment deleted 15 December 2010 05:36:25AM [-]
Comment author: JoshuaZ 15 December 2010 05:58:11AM 2 points [-]

Wow, that's even more impressive than the claim made by some Christian theologians that part of the enjoyment in heaven is getting to watch the damned be tormented. If any AI thinks anything even close to this then we have failed Friendliness even more than if we made a simple object maximizer.

Comment author: Eugine_Nier 15 December 2010 06:28:25AM 4 points [-]

Next thing you're going to tell me that an FAI shouldn't push fat people in front of trolleys.

Note: A sufficiently powerful FAI shouldn't need to, but that is different from saying it wouldn't.