wedrifid comments on A Less Wrong singularity article? - Less Wrong

Post author: Kaj_Sotala 17 November 2009 02:15PM




Comment author: wedrifid 18 November 2009 05:55:58AM 1 point

Why should I use the word "should" to describe this, when "will" serves exactly as well?

'Will' does not serve exactly as well when considering agents with limited optimisation power (that is, any actual agent). Consider, for example, a Paperclip Maximiser that happens to be less intelligent than I am. I may be able to predict that Clippy will colonize Mars before he invades Earth, while also being quite sure that more paperclips would be formed if Clippy invaded Earth first. In this case I will likely want a word that means "would better serve to maximise the agent's expected utility even if the agent does not end up doing it".

One option is to take 'should' and make it the generic 'should<Agent>'. I'm not saying you should use 'should' (implicitly, 'should<Clippy>') to describe the action that Clippy would take if he had sufficient optimisation power. But I am saying that 'will' does not serve exactly as well.
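The distinction between should<Agent> (the utility-maximizing action) and what a bounded agent will actually do can be sketched in code. This is a minimal illustration, not anything from the thread: the action names, payoffs, and the `noise` model of limited optimisation power are all invented for the example.

```python
import random

# Stipulated paperclip payoffs, invented for illustration only.
ACTIONS = ["colonize_mars_first", "invade_earth_first"]

def expected_paperclips(action):
    return {"colonize_mars_first": 1_000, "invade_earth_first": 5_000}[action]

def should(utility, actions):
    """should<Agent>: the action that would best serve the agent's
    expected utility, whether or not the agent ends up taking it."""
    return max(actions, key=utility)

def will(utility, actions, noise=0.5):
    """A crude model of a bounded optimizer: it ranks actions by a
    noisily misjudged utility, so it may fail to pick the best one."""
    def misjudged(action):
        return utility(action) * random.uniform(noise, 1 / noise)
    return max(actions, key=misjudged)
```

Here `should(expected_paperclips, ACTIONS)` always returns `"invade_earth_first"`, while `will` may return either action — which is exactly why "will" cannot substitute for the agent-indexed "should".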

Comment author: Eliezer_Yudkowsky 18 November 2009 07:15:26AM 1 point

I use "would-want" to indicate extrapolation. I.e., A wants X but would-want Y. This helps to indicate the implicit sensitivity to the exact extrapolation method, and that A does not actually represent a desire for Y at the current moment, etc. Similarly, A does X but would-do Y, A chooses X but would-choose Y, etc.

Comment author: timtyler 18 November 2009 12:24:42PM -1 points

"Should" is a standard word for indicating moral obligation - it seems only sensible to use it in the context of other moral systems as well.