
blacktrance comments on What are your contrarian views? - Less Wrong Discussion

Post author: Metus | 15 September 2014 09:17AM


Comments (806)


Comment author: blacktrance 15 September 2014 07:20:31PM * 44 points

[Please read the OP before voting. Special voting rules apply.]

Human value is not complex, wireheading is the optimal state, and Fun Theory is mostly wrong.

Comment author: VAuroch 17 September 2014 07:44:16PM 1 point

What would you have to see to convince you otherwise?

Comment author: blacktrance 17 September 2014 09:09:48PM 0 points

I think it would take an a priori philosophical argument rather than empirical evidence.

Comment author: [deleted] 26 September 2014 01:23:23AM 0 points

Wouldn't cognitive science or neuroscience be sufficient to falsify such a theory? All we really have to do is show that the "good life", as seen from the inside, does not correspond to maximized happy-juice or dopamine reward.

Comment author: [deleted] 26 September 2014 01:32:41AM 0 points

I can think of something I prefer, on reflection, against wireheading. Now what?

Comment author: blacktrance 26 September 2014 02:17:43AM -2 points

There are a lot of things people are capable of preferring that aren't pleasure; the question is whether those are what they should prefer.

Comment author: Transfuturist 15 November 2014 09:29:57PM 2 points

Awfully presumptuous of you to tell people what they should prefer.

Comment author: blacktrance 15 November 2014 10:53:37PM -1 points

Why? We do this all the time, when we advise people to do something different from what they're currently doing.

Comment author: Transfuturist 16 November 2014 02:00:15AM 2 points

No, we don't. That's making recommendations about how they can attain their preferences. That you don't seem to understand this distinction is concerning. Instrumental and terminal values are different.

Comment author: blacktrance 16 November 2014 08:00:05PM -2 points

My position is in line with that distinction: people are wrong about what their terminal values are, and they should realize that their actual terminal value is pleasure.

Comment author: Transfuturist 26 November 2014 01:20:49AM 1 point

Why is my terminal value pleasure? Why should I want it to be?

Comment author: blacktrance 26 November 2014 01:43:33AM * 0 points

Fundamentally, because pleasure feels good and preferable, and it needs nothing additional (such as conditioning through social norms) to make it desirable.

Comment author: Transfuturist 26 November 2014 03:05:20AM * -1 points

Why should I desire what you describe? What's wrong with values more complex than a single transistor?

Also, naturalistic fallacy.

Comment author: DefectiveAlgorithm 26 November 2014 01:49:08AM * -1 points

Can you define 'terminal values', in the context of human beings?

Comment author: blacktrance 26 November 2014 03:42:01AM 0 points

Terminal values are those sought for their own sake, as opposed to instrumental values, which are sought because they ultimately produce terminal values.

Comment author: DefectiveAlgorithm 26 November 2014 11:24:11AM * 0 points

I know what terminal values are, and I apologize if the intent behind my question was unclear. To clarify, my request was specifically for a definition in the context of human beings — that is, entities with cognitive architectures that have no explicitly defined utility functions and that comprise multiple interacting subsystems which may value different things (e.g. emotional vs. deliberative systems).

I'm well aware of the huge impact my emotional subsystem has on my decision making. However, I don't consider it 'me' — rather, I consider it an external black box which interacts very closely with that which I do identify as me (mostly my deliberative system). I can acknowledge the strong influence it has on my motivations while explicitly holding a desire that this not be so — a desire which would in certain contexts lead me to knowingly make decisions that would irreversibly sacrifice a significant portion of my expected future pleasure.

To follow up on my initial question, it had been intended to lay the groundwork for this: what empirical claims do you consider yourself to be making about the jumble of interacting systems that is the human cognitive architecture when you say that the sole 'actual' terminal value of a human is pleasure?