PhilGoetz comments on Humans are utility monsters - Less Wrong

67 Post author: PhilGoetz 16 August 2013 09:05PM




Comment author: PhilGoetz 09 February 2015 08:59:56PM 0 points

The actual reality does not have high level objects such as nematodes or humans.

Um... yes, it does. "Reality" doesn't conceptualize them, but I, the agent analyzing the situation, do. I will have some function that looks at the underlying reality and partitions it into objects, and some other function that computes utility over those objects. These functions could be composed to give one big function from physics to utility. But that would be, epistemologically, backwards.

Before one could even consider the utility of a human's (or a nematode's) existence, one has got to have a function that would somehow process a bunch of laws of physics and the state of a region of space, and tell us how happy/unhappy that region of space feels, what its value is, and so on.

No. Utility is a thing agents have. "Utility theory" is a thing you use to compute an agent's desired action; it is therefore a thing that only intelligent agents have. Space doesn't have utility. To quote (perhaps unfortunately) Žižek, space is literally the stupidest thing there is.
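The composition PhilGoetz describes (an agent-supplied partition function feeding an agent-supplied utility function) can be sketched in a few lines of Python. Everything here — `World`, `partition_into_objects`, the weights — is a hypothetical toy illustration, not anything from the thread:

```python
from typing import List

# Raw "physics": a list of particle labels standing in for a low-level
# world state. Reality provides only this; the rest is the agent's.
World = List[str]

def partition_into_objects(world: World) -> List[str]:
    """Agent-supplied function: carve the raw state into high-level objects."""
    return [p for p in world if p in ("human", "nematode")]

def utility_over_objects(objects: List[str]) -> float:
    """Agent-supplied function: assign value to the recognized objects."""
    weights = {"human": 10.0, "nematode": 0.1}
    return sum(weights.get(o, 0.0) for o in objects)

def composed_utility(world: World) -> float:
    """The 'one big function from physics to utility': a composition,
    with the object-level description sitting in the middle."""
    return utility_over_objects(partition_into_objects(world))

print(composed_utility(["human", "nematode", "rock"]))  # 10.1
```

The point of the sketch is that `composed_utility` exists only as a composition; neither the partition nor the weights come from the world state itself.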

Comment author: private_messaging 14 February 2015 11:50:45PM 1 point

Before one could even consider the utility of a human's (or a nematode's) existence

No. Utility is a thing agents have.

'one' in that case refers to an agent who's trying to value feelings that physical systems have.

I think there's some linguistic confusion here. As an agent valuing that there's no enormous torture camp set up in a region of space, I'd need to have a utility function over space, which gives the utility of that space.

Comment author: PhilGoetz 16 February 2015 02:12:14AM 0 points

'one' in that case refers to an agent who's trying to value feelings that physical systems have.

I see what you're doing, then. I'm thinking of a real-life limited agent like me, who has little idea how the inside of a nematode or human works. I have a model of each, and I make a guess at how to weigh them in my utility function based on observations of them. You're thinking of an ideal agent that has a universal utility function that applies to arbitrary reality.

Still, though, the function is at least as likely to start its evaluation top-down (partitioning the world into objects) as bottom-up.

I don't understand your overall point. It sounds to me like you're taking a long way around to agreeing with me, yet phrasing it as if you disagreed.

Comment author: dxu 16 February 2015 02:22:20AM 1 point

I think (and private_messaging should feel free to correct me if I'm wrong) that what private_messaging is saying is, in effect, that before you can assign utilities to objects or worldstates or whatever, you've got to be able to recognize those objects/worldstates/whatever. I may value "humans", but what is a "human"? Since the actual reality doesn't have a "human" as an ontologically fundamental category--it simply computes the behavior of particles according to the laws of physics--the definition of the "human" which I assign utility to must be given by me. I'm not going to get the definition of a "human" from the universe itself.
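dxu's point — that the definition of "human" is supplied by the agent, not by the universe — can be illustrated by showing two agents assigning different utilities to the same low-level state. The state, the classifiers, and the numbers below are all hypothetical illustrations:

```python
from typing import Callable, Dict

# Toy low-level state: named physical measurements. The universe
# provides these; it does not provide the category "human".
State = Dict[str, float]

def utility(state: State, is_human: Callable[[State], bool]) -> float:
    """Utility depends on the agent-supplied classifier, not on the state alone."""
    return 10.0 if is_human(state) else 0.0

state = {"mass_kg": 70.0, "neurons": 8.6e10}

# Agent A's definition of "human": anything with enough neurons.
agent_a = lambda s: s["neurons"] > 1e10
# Agent B's stricter definition also requires a different mass range.
agent_b = lambda s: s["neurons"] > 1e10 and s["mass_kg"] < 50.0

print(utility(state, agent_a))  # 10.0
print(utility(state, agent_b))  # 0.0
```

Same state, different utilities: the disagreement lives entirely in the agents' definitions, which is exactly the sense in which the category isn't ontologically fundamental.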

Comment author: PhilGoetz 16 February 2015 03:02:07AM 0 points

Okay. I don't understand his point, then. That doesn't seem relevant to what I was saying.