timtyler comments on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions - Less Wrong

Post author: MichaelGR 11 November 2009 03:00AM


Comment author: timtyler 16 November 2009 07:31:11PM 3 points

There is no such thing as an "unobjectionable set of values".

Imagine the values of an agent that wants all the atoms in the universe for its own ends. It will object to any other agent's values, since it objects to the very existence of other agents: those agents use up its precious atoms and put them into "wrong" configurations.

Whatever values you have, they seem bound to piss off somebody.

Comment author: StefanPernar 18 November 2009 02:56:44AM -1 points

There is no such thing as an "unobjectionable set of values".

And here I disagree. Firstly, see my comment about utility function interpretation on another post of yours. Secondly, as soon as one assumes existence to be preferable to non-existence, one can formulate a set of unobjectionable values (http://www.jame5.com/?p=45 and http://rationalmorality.info/?p=124). But granted, if you do not want to exist, nor have a desire to be rational, then rational morality has in fact little to offer you. Non-existence and irrational behaviour are such trivial goals to achieve that one would hardly require - nor value, and thus seek, for that matter - well-thought-out advice.

Comment author: timtyler 18 November 2009 08:23:05AM 2 points

Alas, the first link seems almost too silly to bother with, but briefly:

Unobjectionable - to whom? An agent objecting to another agent's values is a simple and trivial occurrence. All an agent has to do is to state that - according to its values - it wants to use the atoms of the agent with the supposedly unobjectionable utility function for something else.

"Ensure continued co-existence" is vague and wishy-washy. Perhaps publicly work through some "trolley problems" using it - so people have some idea of what you think it means.

You claim there can be no rational objection to your preferred utility function.

In fact, an agent with a different utility function can (obviously) object to its existence - on grounds of instrumental rationality. I am not clear on why you don't seem to recognise this.