CronoDAS comments on Open Thread: October 2009 - Less Wrong

5 Post author: gwern 01 October 2009 12:49PM

Comment author: CronoDAS 03 October 2009 08:58:21AM 1 point [-]

"Utilons" are a stand-in for "whatever it is you actually value". The psychological state of happiness is one that people value, but not the only thing. So, yes, we tend to support decision making based on consequentialist utilitarianism, but not hedonistic consequentialist utilitarianism.

See also: Coherent Extrapolated Volition

Comment author: AndrewKemendo 03 October 2009 02:04:32PM 0 points [-]

Upon reading that link (which I imagine is now fairly outdated?), his theory falls apart under the weight of its coercive nature, as the questioner points out.

It is understood that an AI's impact will fall on all of humanity, regardless of its implementation, if it is used for decision making. As a result, consequentialist utilitarianism still holds a majority-rule position, as the link discusses, which implies that the decisions the AI would make would favor a "utility" calculation (spare me the argument about utilons; as an economist I have previously been neck-deep in Bentham).

The discussion simultaneously dismisses and reinforces the importance of the debate itself, which seems contradictory. I personally think this is a much more important topic than it is generally taken to be, and I have yet to see a compelling argument otherwise.

Among the researchers I have talked to about this specifically, the responses I have gotten are: "I'm not interested in that, I want to know how intelligence works" or "I just want to make it work; I'm interested in the science behind it." I think this attitude is pervasive, and it amounts to ignoring the subject.

Comment author: AndrewKemendo 03 October 2009 01:18:10PM -1 points [-]

"Utilons" are a stand-in for "whatever it is you actually value"

Of course, which makes them useless as a metric.

we tend to support decision making based on consequentialist utilitarianism

Since you seem to speak for everyone in this category, how did you come to the conclusion that this is the optimal philosophy?

Thanks for the link.