randallsquared comments on Welcome to Heaven - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
We disagree if you intended to make the claim that 'our goals' are the bedrock on which we should base the notion of 'ought', since we can take the moral skepticism a step further, and ask: what evidence is there that there is any 'ought' above 'maxing out our utility functions'?
A further point of clarification: It doesn't follow - by definition, as you say - that what is valuable is what we value. Would making paperclips become valuable if we created a paperclip maximiser? What about if paperclip maximisers outnumbered humans? I think benthamite is right: the assumption that 'what is valuable is what we value' tends just to be smuggled into arguments without further defense. This is the move that the wirehead rejects.
Note: I took the statement 'what is valuable is what we value' to be equivalent to 'things are valuable because we value them'. The statement has another possible meaning: 'we value things because they are valuable'. I think both are incorrect for the same reason.
I think I must be misunderstanding you. It's not so much that I'm saying that our goals are the bedrock, as that there's no objective bedrock to begin with. We do value things, and we can make decisions about actions in pursuit of things we value, so in that sense there's some basis for what we "ought" to do, but I'm making exactly the same point you are when you say:
I know of no such evidence. We do act in pursuit of goals, and that's enough for a positivist morality, and it appears to be the closest we can get to a normative morality. You seem to say that it's not very close at all, and I agree, but I don't see a path to closer.
So, to recap, we value what we value, and there's no way I can see to argue that we ought to value something else. Two entities with incompatible goals are to some extent mutually evil, and there is no rational way out of it, because arguments about "ought" presume a given goal both can agree on.
To the paperclip maximizer, they would certainly be valuable -- ultimately so. If you have some other standard, some objective measurement, of value, please show me it. :)
By the way, you can't say the wirehead doesn't care about goals: part of the definition of a wirehead is that he cares most about the goal of stimulating his brain in a pleasurable way. An entity that didn't care about goals would never do anything at all.
I think that you are right that we don't disagree on the 'basis of morality' issue. My claim is only that which you said above: there is no objective bedrock for morality, and there's no evidence that we ought to do anything other than max out our utility functions. I am sorry for the digression.
I agree with the rest of your comment, and depending on how you define "goal", with the quote as well. However, what about entities driven only by heuristics? Those may have developed to pursue a goal, but not necessarily so. Would you call an agent that is purely heuristics-driven goal-oriented? (I have in mind simple rules along the lines of "go left when there is a light on the right" -- think Braitenberg vehicles minus the evolutionary aspect.)
Yes, I thought about that when writing the above, but I figured I'd fall back on the term "entity". ;) An entity would be something that could have goals (sidestepping the hard work of specifying exactly what objects qualify).
See also
It's hard to be original anymore. Which is a good sign!