JonatasMueller comments on Arguments against the Orthogonality Thesis - Less Wrong

-7 Post author: JonatasMueller 10 March 2013 02:13AM


Comment author: shminux 10 March 2013 04:17:11AM *  1 point

I find this post to be too low quality to support even itself, let alone stand up against the orthogonality thesis (on which I have no opinion). It needs a complete rewrite at best. Some (rather incomplete) notes are below.

This is either because the beings in question have some objective difference in their constitution that associates them to different values, or because they can choose what values they have.

Where do you include environmental and cultural influences?

If they differ in other values, given that they are constitutionally similar, then the differing values could not be all correct at the same time, they would be differing due to error in choice.

This does not follow. Maybe you need to give some examples. What do you mean by "correct" and "error" here?

What is important is the satisfaction, or good feelings, that they produce, in the present or future (what might entail life preservation), which is basically the same thing to everyone.

This is a contentious attempt to convert everything to hedons. People have multiple contradictory impulses, desires and motives which shape their actions, often not by "maximizing good feelings".

They don't regularly put their hands into boiling water to feel the pain

Really? Been to YouTube and other video sites lately?

There is a difference between valid and invalid human values, which is the ground of justification for moral realism: valid values have an epistemological justification, while invalid ones are based on arbitrary choice or intuition.

This sounds like a pronouncement of absolute truth, not a description of one of many competing models. It is not clear that the "epistemological justification" is a good definition of the term "valid".

We could, theoretically, be living inside virtual worlds in an underlying alien universe with different physical laws and scientific facts, but we can nonetheless be sure of the reality of our conscious experiences in themselves, which are directly felt.

This is wrong in so many ways, unless you define reality as "conscious experiences in themselves", which is rather non-standard. In any case, unless you are a dualist, you can probably agree that your conscious experiences can be virtual as much as anything else.

Good and bad feelings (or conscious experiences) are physical occurrences, and therefore objectively good and bad occurrences, and objective value.

Again, you use the term "objective" for feelings and conscious experiences, which are not easily measured or agreed upon to be in any way objective, certainly no more than the "external world".

The existence of personal identities is purely an illusion that cannot be justified by argument, and clearly disintegrates upon deeper analysis (for why that is, see, e.g., this essay: Universal Identity).

Uhh, that post sucked as well.

Kinda stopped reading after that, no point really. Please consider learning the material before writing about it next time. Maybe read a Sequence or two, can't hurt, can it?

Comment author: JonatasMueller 10 March 2013 04:37:02AM *  3 points

Where do you include environmental and cultural influences?

While these vary, I don't see legitimate values that could be affected by them. Could you provide examples of such values?

This does not follow. Maybe you need to give some examples. What do you mean by "correct" and "error" here?

Imagine that two exact replicas of a person exist in different locations, exactly the same except for an antagonism in one of their values. Both could not be correct at the same time about that value. I mean error in the sense, for example, that Eliezer employs in Coherent Extrapolated Volition: the error that comes from insufficient intelligence in thinking about our values.

This is a contentious attempt to convert everything to hedons. People have multiple contradictory impulses, desires and motives which shape their actions, often not by "maximizing good feelings".

Except in the aforementioned sense of error, could you provide examples of legitimate values that don't reduce to good and bad feelings?

Really? Been to the Youtube and other video sites lately?

I think that the literature on masochism is stronger evidence than YouTube videos, which could show isolated incidents of people who are not regular masochists. If you have evidence from those sites, I'd like to see it.

This is wrong in so many ways, unless you define reality as "conscious experiences in themselves", which is rather non-standard. In any case, unless you are a dualist, you can probably agree that your conscious experiences can be virtual as much as anything else.

Even if virtual, or illusory, they would still be real occurrences, real illusions, being directly felt. I mean that in the sense of Nick Bostrom's simulation argument.

Uhh, that post sucked as well.

Perhaps it was not sufficiently explained, but then check this introduction on Less Wrong, or the comment I made below about it:

http://lesswrong.com/lw/19d/the_anthropic_trilemma/

I have read many Sequences, understand them well, and assure you that, if this post seems not to make sense, it is because it was not explained at sufficient length.

Comment author: aleksiL 10 March 2013 08:03:45AM 1 point

Imagine that two exact replicas of a person exist in different locations, exactly the same except for an antagonism in one of their values. Both could not be correct at the same time about that value.

The two can't be perfectly identical if they disagree. For the conclusion to hold, you have to additionally assume that the discrepancy lies in the parts that reason about their values rather than in the values themselves.

Comment author: JonatasMueller 10 March 2013 08:37:38AM 0 points

What if I changed the causal chain in this example, so that instead of the antagonistic values being caused by the identical agents themselves, I had inserted the antagonistic values into their memories myself while replicating them? I could have picked the antagonistic value from the mind of a different person and put it into one of the replicas, complete with a small reasoning or justification in its memory.

They would both wake up, one with one value in their memory, and another with an antagonistic value. What would it be that would make one of them correct and not the other? Could both values be correct? The issue here is questioning if any values whatsoever can be validly held for similar beings, or if a good justification is needed. In CEV, Eliezer proposed that we can make errors about our values, and that they should be extrapolated for the reasonings we would make if we had higher intelligence.