
ArisKatsaris comments on [LINK] Why I'm not on the Rationalist Masterlist - Less Wrong Discussion

21 Post author: Apprentice 06 January 2014 12:16AM


Comments (866)

Comment author: ArisKatsaris 08 January 2014 12:17:42PM 12 points [-]

Yvain has told you in the past the following:

Could you do me a BIG FAVOR and every time you write "Yvain says..." or "Yvain believes..." in the future, follow it with "...according to my interpretation of him, which has been consistently wrong every time I've tried to use it before"? I am getting really tired of having to clean up after your constant malicious misinterpretations of me.

So everyone should be aware that whenever Dmytry/private_messaging claims Yvain said something, that's almost always wrong according to Yvain's own view of what Yvain said.

Comment author: private_messaging 12 January 2014 11:30:25AM *  -2 points [-]

The original quote from Yvain was:

I suppose the difference is whether you're doing the Intel attack now, or in a hypothetical future in which Intel is making brain simulators that seem likely to become AGI. As someone else mentioned, if we're talking about literally THEY ARE BUILDING SKYNET RIGHT NOW, then violence seems like the right idea.

Emphasis mine. In this original quote, in the hypothetical future where Intel is building brain simulators that seem likely to become artificial general intelligence, he supports violence. As clear as it can be.

His subsequent reformulation, which makes him look less bad, was:

"Even Yvain supports violence if AI seems imminent". No, I might support violence if an obviously hostile unstoppable SKYNET-style AI seemed clearly imminent.

Now, the caveat here is that he would have to count brain simulators built by Intel in that hypothetical future as an example of "an obviously hostile unstoppable SKYNET-style AI", which is a clear contradiction (if the hostility were so obvious, Intel wouldn't be building those brain emulations).