Kaj_Sotala comments on Effective Altruism Through Advertising Vegetarianism? - Less Wrong

20 Post author: peter_hurford 12 June 2013 06:50PM




Comment author: Kaj_Sotala 26 June 2013 06:34:26PM 0 points

I think that the concept of idealized value is obviously important in an FAI context, since we need some way of formalizing "what we want" in order to have any way of ensuring that an AI will further the things we want. I do not understand why the concept would be relevant to our personal lives, however.

Comment author: Vladimir_Nesov 06 July 2013 02:07:32PM 1 point

I think that the concept of idealized value is obviously important in an FAI context, since we need some way of formalizing "what we want" in order to have any way of ensuring that an AI will further the things we want.

The question of what is normatively the right thing to do (given the resources available) is the same for an FAI and in our personal lives. My understanding is that "implicit idealized value" is the shape of the correct answer to that question, not just a tool restricted to the context of FAI. It might be hard for a human to proceed from this concept to concrete decisions, but this is a practical difficulty, not a restriction on the scope of applicability of the idea. (And to see how much of a practical difficulty it is, it is necessary to actually attempt to resolve it.)

I do not understand why the concept would be relevant to our personal lives, however.

If idealized value indicates the correct shape of normativity, the question should instead be, How are our personal lives relevant to idealized value? One way was discussed a couple of steps above in this conversation: exploitation/exploration tradeoff. In pursuit of idealized values, if in our personal lives we can't get much information about them, a salient action is to perform/support research into idealized values (or relevant subproblems, such as preventing/evading global catastrophes).
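The exploration/exploitation tradeoff invoked here comes from sequential decision theory. As a concrete illustration of the tradeoff itself (a standard textbook toy model, not anything proposed in this thread), an epsilon-greedy bandit agent spends most of its effort exploiting the option it currently values most, while reserving a small probability of exploring other options in case its current value estimates are wrong — loosely analogous to pursuing present values while investing some resources in researching what our idealized values actually are. All names and parameters below are illustrative:

```python
import random

def epsilon_greedy(estimates, epsilon=0.1):
    """Explore a random arm with probability epsilon;
    otherwise exploit the arm with the highest current estimate."""
    if random.random() < epsilon:
        return random.randrange(len(estimates))
    return max(range(len(estimates)), key=lambda i: estimates[i])

def update(estimates, counts, arm, reward):
    """Incrementally update the running-mean value estimate for `arm`."""
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

# Toy run: two arms with (unknown-to-the-agent) payout probabilities 0.3 and 0.7.
random.seed(0)
true_p = [0.3, 0.7]
estimates, counts = [0.0, 0.0], [0, 0]
for _ in range(5000):
    arm = epsilon_greedy(estimates)
    reward = 1.0 if random.random() < true_p[arm] else 0.0
    update(estimates, counts, arm, reward)
# With enough trials, the agent's estimates converge and it pulls
# the genuinely better arm (index 1) most of the time.
```

The point of the analogy is only that when value estimates are highly uncertain, allocating some effort to improving the estimates (exploration, i.e. research into idealized values) can dominate naively acting on current estimates.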

Comment author: Kaj_Sotala 09 July 2013 02:31:16PM 0 points

what is normatively the right thing to do (given the resources available)

What does this mean? It sounds like you're talking about some kind of objective morality?