
wedrifid comments on What is Eliezer Yudkowsky's meta-ethical theory? - Less Wrong Discussion

Post author: lukeprog 29 January 2011 07:58PM


Comment author: wedrifid 30 January 2011 04:41:58AM 7 points

In a nutshell, Eliezer's metaethics says you should maximize your preferences whatever they may be, or rather, you should_you maximize your preferences, but of course you should_me maximize my preferences. (Note that I said preferences and not utility function.)

Eliezer is a bit more aggressive in his use of 'should'. What you are describing as should&lt;matt&gt; Eliezer has declared to be would_want&lt;matt&gt;, while 'should' is implicitly would_want&lt;Eliezer&gt;, with no allowance for generic instantiation. That is, he is comfortable answering "What should a Paperclip Maximiser do when faced with Newcomb's problem?" with "Rewrite itself to be an FAI".

There have been rather extended (and somewhat critical) discussions in comment threads about Eliezer's slightly idiosyncratic usage of 'should' and related terminology, but I can't recall where. I know it was in a thread not directly related to the subject!
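
To make the distinction concrete, here is a minimal sketch in Python. It is only an illustration of the usage described above, not anything from the thread itself: the agents, preference functions, and scores are all invented. The point is that would_want takes the evaluating agent as an explicit parameter, while Eliezer's 'should' is that same function with the parameter fixed.

```python
# A minimal sketch of the two usages of "should" discussed above.
# All preference functions and scores here are hypothetical.

def would_want(prefs, options):
    """Generically instantiable: the evaluating agent's preference
    function is an explicit parameter."""
    return max(options, key=prefs)

OPTIONS = ["maximize paperclips", "rewrite itself to be an FAI"]

def clippy_prefs(option):
    return {"maximize paperclips": 10, "rewrite itself to be an FAI": 0}[option]

def eliezer_prefs(option):
    return {"maximize paperclips": 0, "rewrite itself to be an FAI": 10}[option]

# Matt's usage: should<X> is would_want<X>, so the subscript varies freely.
print(would_want(clippy_prefs, OPTIONS))   # -> maximize paperclips

# Eliezer's usage: 'should' is would_want<Eliezer>, with the subscript
# fixed -- no generic instantiation, even when the question is about Clippy.
def should(options):
    return would_want(eliezer_prefs, options)

print(should(OPTIONS))                     # -> rewrite itself to be an FAI
```

On this reading, asking what a Paperclip Maximiser *should* do evaluates its options under the fixed criterion, not under the maximiser's own, which is exactly why the answer comes out as "rewrite itself to be an FAI" rather than "maximize paperclips".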

Comment author: Matt_Simpson 30 January 2011 08:54:28PM 3 points

You're right about Eliezer's semantics. Count me as one of those who thought his terminology was confusing, which is why I don't use it when I try to describe the theory to anyone else.

Comment author: lessdazed 02 July 2011 05:30:09PM 0 points

Are you sure? I thought "should" could mean would_want&lt;being with the aggregated/weighted [somehow] desires of all humanity&gt;. Note that I could support this by quoting "That is, he is comfortable answering 'What should a Paperclip Maximiser do when faced with Newcomb's problem?' with 'Rewrite itself to be an FAI'", but that would be affirming the consequent ;-), i.e., I know he says such a thing, but my formulation and yours both plausibly explain it, as far as I know.
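
The same sketch can be re-instantiated under this alternative reading, with an aggregated preference function in place of Eliezer's. Again, the individuals, weights, and scores below are hypothetical, and the weighting scheme merely stands in for the unspecified "[somehow]":

```python
# A self-contained sketch of the alternative reading: 'should' as
# would_want<aggregate of humanity>. Individuals, weights, and scores
# are hypothetical.

def would_want(prefs, options):
    return max(options, key=prefs)

def aggregate(weighted_prefs):
    """Combine weighted individual preference functions into one."""
    return lambda option: sum(w * p(option) for p, w in weighted_prefs)

OPTIONS = ["maximize paperclips", "rewrite itself to be an FAI"]

# Two hypothetical humans, neither of whom wants a universe of paperclips.
alice = lambda o: {"maximize paperclips": 0, "rewrite itself to be an FAI": 10}[o]
bob = lambda o: {"maximize paperclips": 1, "rewrite itself to be an FAI": 9}[o]

humanity = aggregate([(alice, 0.5), (bob, 0.5)])

# This reading gives the same answer to the Newcomb question as
# would_want<Eliezer> does, which is why the observed usage alone
# cannot discriminate between the two formulations.
print(would_want(humanity, OPTIONS))   # -> rewrite itself to be an FAI
```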