Psychohistorian comments on Exterminating life is rational - Less Wrong

Post author: PhilGoetz 06 August 2009 04:17PM


Comment author: PhilGoetz 07 August 2009 01:03:54AM 1 point

It seems clear from context that he means it hedonistically, i.e. my own hedonistic experience is my only concern if I'm selfish; I don't care about what other people want or think.

Instead of trying to interpret the context, you should believe that I mean what I say literally. I repeat:

If you still think that you wouldn't, it's probably because you're thinking a 1% increase in your utility means something like a 1% increase in the pleasure you experience. It doesn't. It's a 1% increase in your utility. If you factor the rest of your universe into your utility function, then it's already in there.

In fact, I have already explained my usage of the word "selfish" to you in this same context, repeatedly, in a different post.

Psychohistorian wrote:

Utility curves are strictly arational. A rational paperclip maximizer is an entirely possible being. Any statement of the kind "Rational agents are/are not selfish" is a type error; selfishness is entirely orthogonal to rationality.

I quote myself again:

If you act in the interest of others because it's in your self-interest, you're selfish. Rational "agents" are "selfish", by definition, because they try to maximize their utility functions. An "unselfish" agent would be one trying to also maximize someone else's utility function. That agent would either not be "rational", because it was not maximizing its utility function; or it would not be an "agent", because agenthood is found at the level of the utility function.
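This definition can be sketched as a toy example (the numbers and the altruism weight are hypothetical, not from the post): the agent is "selfish" in Goetz's sense because it maximizes its own utility function, even though that function assigns weight to other people's outcomes.

```python
def utility(own_pleasure, others_pleasure, altruism_weight=0.5):
    """The agent's own utility function; others' welfare is already
    a term 'in there', weighted by a hypothetical altruism_weight."""
    return own_pleasure + altruism_weight * others_pleasure

# Choosing between two actions, the agent simply maximizes its own
# function -- which here happens to favor the action that helps others.
selfish_option = utility(own_pleasure=10, others_pleasure=0)   # 10.0
helping_option = utility(own_pleasure=7, others_pleasure=10)   # 12.0
best = max(selfish_option, helping_option)                     # 12.0
```

On this usage, "helping" is still selfish behavior: the agent never consults anyone else's utility function, only its own.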

Comment author: Psychohistorian 07 August 2009 02:43:44AM 1 point

Rational agents incorporate the benefits to others into their utility functions.

Seeing that as a section header may have thrown me off there.

That aside, I do understand what you're saying, and I did notice the original 1%-for-1% contrast. Though I'd note it doesn't follow that a rational agent would be willing to take a 1% chance of destroying the universe in exchange for a 1% increase in his utility function; the destroyed universe would probably have negative utility, i.e. a greater-than-100% loss, so that's not an even bet.
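The bet being disputed can be checked numerically (the utility figures below are hypothetical; this is a toy expected-value calculation, not anything computed in the thread):

```python
def expected_utility(u_now, gain, p_destroy, u_destroyed):
    """Expected utility of the gamble: with probability p_destroy the
    universe is destroyed (utility u_destroyed); otherwise current
    utility u_now increases by the fraction gain."""
    return (1 - p_destroy) * u_now * (1 + gain) + p_destroy * u_destroyed

u = 100.0  # arbitrary current utility

# With a zero-utility null state, a 1% gain justifies slightly less
# than a 1% destruction risk (break-even at p = 0.01/1.01, about 0.99%):
expected_utility(u, gain=0.01, p_destroy=0.0099, u_destroyed=0.0)   # > 100
expected_utility(u, gain=0.01, p_destroy=0.01,   u_destroyed=0.0)   # < 100

# If the post-destruction state has negative utility, as Psychohistorian
# suggests, the same bet fails by a wide margin:
expected_utility(u, gain=0.01, p_destroy=0.0099, u_destroyed=-1000.0)
```

This is the crux of the disagreement: the bet's sign depends entirely on the utility assigned to the null state after destruction.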

The whole arational point is my mistake; the whole paragraph:

But maybe they're just not as rational as you...

reads very much like it is using "selfish" in the strict rather than the holistic-utility sense, and that was what I was focusing on in this response. I was focusing specifically on that section and did not reread the whole post, so I got the wrong idea. My point on evolution stands, and the negative-utility argument still makes the +1%-for-1%-chance-of-destruction argument fail. But this doesn't matter much, since one can hardly suppose all agents in charge of making such decisions will be perfectly rational.

Comment author: PhilGoetz 07 August 2009 03:42:04AM 3 points

and the negative-utility argument still makes the +1% for 1% chance of destruction argument fail

That's why what I wrote in that section was:

it's not possible that you would not accept a .999% risk, unless you are not maximizing expected value, or you assign the null state after universe-destruction negative utility.

You wrote:

But this doesn't matter much, since one can hardly suppose all agents in charge of making such decisions will be perfectly rational.

I am supposing that. That's why it's in the title of the post. I don't mean that I am certain that is how things will turn out to be. I mean that this post says that rational behavior leads to these consequences. If that means that the only way to avoid the destruction of life is to cultivate a particular bias, then that's the implication.