ESRogs comments on Humans are utility monsters - Less Wrong

Post author: PhilGoetz | 16 August 2013 09:05PM | 67 points


Comment author: ESRogs 18 August 2013 06:00:03PM 1 point

Going back and re-reading ciphergoth's comment above, I now see why you're emphasizing strength of feeling. What you said makes sense; point conceded.

Comment author: PhilGoetz 19 August 2013 10:59:54PM 1 point

I expect that, as we learn enough about neuroscience to begin to answer this, we'll replace "feels more strongly" with some other criterion on which humans come out definitively on top.

Comment author: byrnema 19 August 2013 11:25:13PM * 1 point

I agree, and not just because we're the ones deciding the rubric. I believe an objective sentient bystander would agree that there is some (important) measure by which we come out ahead, meaning our utility deserves greater weight in the equation.

That is, if they are global utility maximizers. Incidentally, where does that assumption come from? It seems kind of strange. Are these utility maximizers just so social and empathetic they want everybody to be happy?

Comment author: [deleted] 20 August 2013 01:13:45AM * 4 points

Are these utility maximizers just so social and empathetic they want everybody to be happy?

You could imagine the perfect global utility maximizer being created by self-modification of beings, or built by beings who desire such a maximizer.

Why would they want that in the first place? Prosocial emotions (e.g. those arising from cooperation and kin-selection instincts, plus altruistic memes) could be a starting point.

Another possible path is philosophical self-reflection. A self-modelling agent could model its utility as resulting from the valuation of mental states, e.g. a hedonist who thinks about what value is to him and concludes that what matters is the (un-)pleasantness of his brain states.

From there, you only need a few philosophical assumptions to generalize:

1) Mental states are time-local; the psychological present lasts perhaps three seconds at most.

2) Our selves are not immutable metaphysical entities, but physical system states that are transformed considerably over time (from fetus to toddler to preteen to adult to mentally disabled).

3) Other beings share the crucial system properties (brains with (un-)pleasantness); we even have common ancestors passing on the blueprints.

4) Hypothetically, though improbably, any being could be transformed into any other being in a gradual process by speculative technology (e.g. nanotechnology could transform me into you, or a human into a chimp or a pig, etc.) without breaking life functions.

5) An agent might decide that it shouldn't matter how a system state came about, only what properties the system state has; e.g. it shouldn't matter to me whether you are a future version of me, transformed by speculative technology starting from my current state, but only what properties your system state has (e.g. (un-)pleasantness).

I'm not claiming this is enough to beat everyday psychological egoism, but it could be enough for a philosopher-system to desire self-modification or the creation of an artificial global utility maximizer.