blacktrance comments on What are your contrarian views? - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (806)
[Please read the OP before voting. Special voting rules apply.]
Human value is not complex, wireheading is the optimal state, and Fun Theory is mostly wrong.
What would you have to see to convince you otherwise?
I think it would take an a priori philosophical argument, rather than empirical evidence.
Wouldn't cognitive science or neuroscience be sufficient to falsify such a theory? All we really have to do is show that "good life", as seen from the inside, does not correspond to maximized happy-juice or dopamine-reward.
I can think of something I prefer, on reflection, against wireheading. Now what?
There are a lot of things that people are capable of preferring that aren't pleasure; the question is whether those are what they should prefer.
Awfully presumptuous of you to tell people what they should prefer.
Why? We do this all the time, when we advise people to do something different from what they're currently doing.
No, we don't. That's making recommendations as to how they can attain their existing preferences. That you don't seem to understand this distinction is concerning. Instrumental and terminal values are different things.
My position is in line with that - people are wrong about what their terminal values are, and they should realize that their actual terminal value is pleasure.
Why is my terminal value pleasure? Why should I want it to be?
Fundamentally, because pleasure feels good and preferable, and it doesn't need anything additional (such as conditioning through social norms) to make it desirable.
Why should I desire what you describe? What's wrong with values more complex than a single transistor?
Also, naturalistic fallacy.
Can you define 'terminal values', in the context of human beings?
Terminal values are what are sought for their own sake, as opposed to instrumental values, which are sought because they ultimately produce terminal values.
I know what terminal values are, and I apologize if the intent behind my question was unclear. To clarify, my request was specifically for a definition in the context of human beings - that is, entities with cognitive architectures that have no explicitly defined utility functions, and with multiple interacting subsystems which may value different things (i.e. emotional vs. deliberative systems). I'm well aware of the huge impact my emotional subsystem has on my decision making. However, I don't consider it 'me' - rather, I consider it an external black box which interacts very closely with that which I do identify as me (mostly my deliberative system). I can acknowledge the strong influence it has on my motivations while explicitly holding a desire that this not be so - a desire which would, in certain contexts, lead me to knowingly make decisions that would irreversibly sacrifice a significant portion of my expected future pleasure.
To follow up on my initial question, it was intended to lay the groundwork for this: What empirical claims do you consider yourself to be making about the jumble of interacting systems that is the human cognitive architecture when you say that the sole 'actual' terminal value of a human is pleasure?