drethelin comments on Discussion: Which futures are good enough? - Less Wrong

Post author: WrongBot 24 February 2013 12:06AM


Comment author: drethelin 24 February 2013 12:39:21AM 5 points

Simulating the people you interact with in each simulation to a strong enough approximation of reality means you're creating tons of suffering people for each one who has an awesome life, even if a copy of each of those people is living a happy life in their own sim. I don't think I would want a bunch of copies of me being unhappy even if I know one copy of me is in heaven.

Comment author: Baughn 24 February 2013 01:36:54AM 11 points

That was my first thought as well.

However, in the least convenient world all the other people are being run by an AI, who through reading your mind can ensure you don't notice the difference. The AI, if it matters, enjoys roleplaying. There are no people other than you in your shard.

Comment author: drethelin 24 February 2013 01:49:46AM 4 points

Also: this seems like a pretty great stopgap if it's more easily achievable than actual full-on friendly universe optimization, and it doesn't prevent the AI from working on that in the meantime and implementing it in the future. I would not be unhappy to wake up in a world where the AI tells me "I was simulating you, but now I'm powerful enough to actually create utopia. Time for you to help!"

Comment author: FeepingCreature 24 February 2013 10:04:51PM 0 points

If the AI were not meaningfully committed to telling you the truth, how could you trust it when it said it was about to actually create utopia?

Comment author: drethelin 24 February 2013 10:12:10PM 1 point

Why would I care? I'm a simulation fatalist. At some point in the universe, every "meaningful" thing will have been either done or discovered, and all that will be left will functionally be having fun in simulations. If I trust the AI to simulate well enough to keep me happy, I trust it to tell me the appropriate amount of truth to make me happy.

Comment author: drethelin 24 February 2013 01:46:30AM 3 points

I'd definitely take that deal if it were offered out of all the possibilities in foom-space, since it seems way, way above average, but it's not the best possible.

Comment author: ikrase 27 February 2013 10:15:06PM 0 points

Personally I would consider averting foom.

Comment author: shiftedShapes 24 February 2013 05:21:08PM 0 points

Is there really a way of simulating people with whom you interact extensively such that they wouldn't exist in much the same way that you do? In other words, are p-zombies possible, or, more to the point, are they a practical means of simulating a human in sufficient detail to fool a human-level intellect?

Comment author: Baughn 24 February 2013 07:48:22PM 1 point

You don't need to simulate them perfectly, just to the level that you don't notice a difference. When the simulator has access to your mind, that might be a lot easier than you'd think.

There's also no need to create p-zombies, if you can instead have a (non-zombie) AI roleplaying as the people. The AI may be perfectly conscious, without the people it's roleplaying as existing.

Comment author: torekp 26 February 2013 08:27:29PM -1 points

"There are no people other than you in your shard."

So, your version was my first thought. However, this creates a contradiction with the stipulation that people "find love that lasts for centuries". For that matter, "finding love" contradicts giving "every single living human being their own separate simulation." (emphasis added)

Comment author: Baughn 26 February 2013 08:58:24PM 1 point

Depends on your definition of "love", really.

Comment author: ikrase 27 February 2013 10:15:26PM 2 points

GAAAAAAAAAHHHHH!

Comment author: ikrase 27 February 2013 10:14:15PM 0 points

I don't think that you need an actual human mind to simulate being a mind to stupid humans (i.e., to pass the Turing test).

Comment author: drethelin 27 February 2013 11:25:03PM 0 points

A mind doesn't need to be human for me not to want billions of copies to suffer on my account.

Comment author: ikrase 27 February 2013 11:51:40PM 0 points

Gah. Ok. Going to use words properly now.

I do not believe it is necessary for an artificial intelligence to be able to suffer in order for it to perform a convincing imitation of a specific human being, especially if it can read your mind.