cousin_it comments on Open Thread: July 2010 - Less Wrong

6 Post author: komponisto 01 July 2010 09:20PM


Comment author: cousin_it 05 July 2010 03:53:54PM 2 points

Okay, next question. Our understanding of the cellular automaton has advanced to the point where we can change one spot of Bob's world, at one specific moment in time, without being too afraid of harming Bob. It will have ripple effects and change the swamp around him slightly, though. So now we have 10^30 possible slightly different potential futures for Bob. He will probably be happy in the overwhelming majority of them. How many should we run to fulfill our moral utility function of making sentients happy?

Comment author: SilasBarta 05 July 2010 04:21:48PM 1 point

Okay, point taken. The answer depends on how (one believes) the social utility function responds to new instantiations of sentients that are very similar to existing ones. But in any case, you would be obligated to preserve the capability to re-instantiate any already-created being.

Comment author: cousin_it 05 July 2010 04:31:12PM 0 points

> The answer depends on how (one believes) the social utility function responds to new instantiations of sentients that are very similar to existing ones.

How does yours?

Comment author: SilasBarta 05 July 2010 05:16:48PM 1 point

I don't think that the creation of new sentients, in and of itself, has an impact on the (my) SUF. It has an impact only to the extent that their creators value, and others disvalue, such new beings.