tut comments on Welcome to Heaven - Less Wrong

Post author: denisbider 25 January 2010 11:22PM




Comment author: Kutta 26 January 2010 11:08:23AM *  2 points [-]

The FAI can make you feel as though you "think things and do stuff", just by changing your preferences.

I can't see how a true FAI could change my preferences if I prefer that they not be changed.

Anyway, can you explain why you are attached to your preferences? That "it's better to value this than value that" is incoherent, and the FAI will see that. The FAI will have no objective, logical reason to distinguish between values you currently have and are attached to and values that you could have and be attached to, and might as well modify you than modify the universe. (Because the universe has exactly the same value either way.)

It does not work this way. We want to do what is right, not what would conform to our utility function if we were petunias, paperclip AIs, or randomly chosen expected utility maximizers; the whole point of Friendliness is to find out and implement what we care about, and nothing else.

I'm not only attached to my preferences; I am, in great part, my preferences. I even have a preference that my preferences not be forcibly changed. Thinking about changing meta-preferences quickly leads to a strange loop, but if I look at a specific outcome (such as being turned into orgasmium) I can still make a moral judgement and reject that outcome.

The FAI will have no objective, logical reason to distinguish between values you currently have and are attached to and values that you could have and be attached to, and might as well modify you than modify the universe. (Because the universe has exactly the same value either way.)

The FAI has a perfectly objective, logical reason to do what's right and nothing else; its existence and utility function are causally traceable to the humans who designed it. An AI that verges on nihilism and contemplates switching humanity's utility function to something else, partly because the universe has "exactly the same value" either way, is definitely NOT a Friendly AI.

Comment author: tut 26 January 2010 11:33:00AM 0 points [-]

...a perfectly objective, ... reason ...

How do you define this term?

Comment author: Kutta 26 January 2010 11:50:40AM *  0 points [-]

"Reason" here: a normal, unexceptional instance of cause and effect. It should be understood in a prosaic way, e.g. reason in a causal sense.

As for "objective", I borrowed it from the parent post to illustrate my point. To expand on "objective" a bit: everything that exists in physical reality is, and our morality is as physical and extant as a brick (via our physical brains), so what sense does it make to distinguish between "subjective" and "objective," or to refer to any phenomena as "objective" when in reality it is not a salient distinguishing feature.

If anything is "objective", then I see no reason why human morality is not; that is why I included the word in my post. But it would probably be best simply to refrain from generating further confusion with the objective/subjective distinction.

Comment author: tut 26 January 2010 12:25:14PM *  1 point [-]

Reason is not the same as cause. A cause is whatever brings something about in the physical world. A reason is a special kind of cause for intentional actions: specifically, a reason for an action is a thought which convinces the actor that the action is good. So an objective reason would need an objective basis for calling something good. I don't know of such a basis, and a bit more than a week ago half of the LW readers were beating up on Byrnema because she kept talking about objective reasons.

Comment author: Kutta 26 January 2010 05:46:11PM 0 points [-]

OK then, it was a misuse of the word on my part. Anyway, I never intended a teleological meaning, for reasons discussed here before.