kokotajlod comments on Weighting the probability of being a mind by the quantity of the matter composing the computer that calculates that mind - Less Wrong

Post author: yttrium 11 February 2014 03:34PM


Comment author: kokotajlod 11 February 2014 06:13:40PM * 1 point

What if, instead of a paperclip-maximizer, we had a machine designed to maximize the amount of machine-pleasure in the world, where machine-pleasure is "the firing of a certain reward circuit in a system that is sufficiently similar to myself"?

Then it seems yttrium has a point: it is all going to come down to when the machine decides there are two systems in the world and when it decides there is only one. And there is no "obvious" choice for the machine to make in this regard.

Edit: And so, if we want to make an AI that maximizes (among other things) certain subjective human experiences, we will have to make sure it doesn't come to some sort of crazy conclusion about what that entails.

I've followed you, Manfred, in framing this question in terms of values and right actions. But the original question was framed in terms of expectations and future experiences. Do you think that the original question doesn't make sense, or do you have something to say about the original formulation as well? I myself am on the fence.

Comment author: Manfred 11 February 2014 08:19:18PM * 2 points

If you make a robot that explicitly cares about things differently depending on how heavy it is, then sure, it can take actions as if it cared about things more when it's heavier.

But that is done using the same probabilities as normal, merely a different utility function. Changing your utilities without changing your probabilities has no impact on the "probability of being a mind."

Comment author: kokotajlod 12 February 2014 01:54:21AM 0 points

We don't have to program the machine to explicitly care about things differently depending on how heavy they are. Instead, we program the machine to care simply about how many systems exist--but wait! According to yttrium, it turns out we don't know what we mean by that.