wedrifid comments on The Irrationality Game - Less Wrong

38 Post author: Will_Newsome 03 October 2010 02:43AM


Comments (910)


Comment author: wedrifid 04 October 2010 04:38:46AM 3 points [-]

I want to upvote each of these points a dozen times. Then another few for the first.

A Singleton AI is not a stable equilibrium

It's the most stable equilibrium I can conceive of — i.e., more stable than if all evidence of life were obliterated from the universe.

Comment author: mattnewport 04 October 2010 04:53:52AM 2 points [-]

I guess I'm playing the game right then :)

I'm curious: do you also think that a singleton is a desirable outcome? It's possible my thinking is biased because I view this outcome as a dystopia and so underestimate its probability due to motivated cognition.

Comment author: Mass_Driver 06 October 2010 05:59:03AM 3 points [-]

Funny you should mention it; that's exactly what I was thinking. I have a friend (also named matt, incidentally) who I strongly believe is guilty of motivated cognition about the desirability of a singleton AI (he thinks it is likely, and therefore is biased toward thinking it would be good) and so I leaped naturally to the ad hominem attack you level against yourself. :-)

Comment author: wedrifid 04 October 2010 06:26:32AM 1 point [-]

I'm curious: do you also think that a singleton is a desirable outcome? It's possible my thinking is biased because I view this outcome as a dystopia and so underestimate its probability due to motivated cognition.

Most of them, no. Some, yes. Particularly since the alternative is the inevitable loss of everything that is valuable to me in the universe.

Comment author: Will_Newsome 04 October 2010 10:12:02PM 5 points [-]

This is incredibly tangential, but I was talking to a friend earlier and I realized how difficult it is to instill in someone the desire for altruism. Her reasoning was basically, "Yeah... I feel like I should care about cancer, and I do care a little, but honestly, I don't really care." This sort of off-hand egoism is something I wasn't used to; most smart people try to rationalize selfishness with crazy beliefs. But it's hard to argue with "I just don't care" other than to say "I bet you will have wanted to have cared", which is grammatically horrible and a pretty terrible argument.

Comment author: Jordan 05 October 2010 03:04:54AM 6 points [-]

I respect blatant apathy a whole hell of a lot more than masked apathy, which is how I would characterize the average person's altruism.