Comments on Why Politics are Important to Less Wrong... - Less Wrong Discussion

Post author: OrphanWilde 21 February 2013 04:24PM 6 points

You are viewing a single comment's thread.

Comment author: [deleted] 21 February 2013 09:41:25PM 2 points

Deciding what we value isn't relevant to friendliness? Could you explain that to me?

Comment author: Larks 22 February 2013 10:18:10AM 2 points

The whole point of CEV is that we give the AI an algorithm for educing our values, and let it run. At no point do we try to work them out ourselves.

Comment author: [deleted] 25 February 2013 10:00:09PM *  0 points

I mentally responded to you and forgot to, you know, actually respond.

I'm a bit confused by this, and since it was upvoted I'm less sure I get CEV...

It might clear things up to point out that I'm drawing a distinction between goals or preferences on the one hand and values on the other. CEV could be summarized as "fulfill our ideal rather than our actual preferences", yeah? As in, we could be empirically wrong about what would maximize the things we care about, since we can't really be wrong about what to care about. So I imagine the AI needing to be programmed with our values (the meta-wants that motivate our current preferences), and it would extrapolate from them to come up with better preferences, or at least it seems that way to me. Or does the AI figure that out too somehow? If so, what does an algorithm that figures out both our preferences and our values contain?

Comment author: Larks 26 February 2013 10:43:28AM 2 points

Ha, yes, I often do that.

The motivation behind CEV also includes the idea that we might be wrong about what we care about. Instead, you give your FAI an algorithm for (see the toy sketch after this list):

  • Locating people
  • Working out what they care about
  • Working out what they would care about if they knew more, etc.
  • Combining these preferences
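
To make the shape of that recipe concrete, here is a toy sketch in Python. Nothing in it comes from the actual CEV proposal: the Person fields, the "corrections" table standing in for what fuller knowledge would change, and the naive intersection in the combining step are all invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Person:
        name: str
        preferences: set   # what they currently care about
        corrections: dict  # hypothetical: revisions they'd make if they knew more

    def locate_people(world):
        """Step 1: find the people whose volition gets extrapolated."""
        return list(world)

    def current_preferences(person):
        """Step 2: work out what they care about now."""
        return person.preferences

    def extrapolate(prefs, person):
        """Step 3: what they would care about if they knew more, thought
        faster, etc. Here that's a lookup table; actually inferring it
        is the hard part."""
        return {person.corrections.get(p, p) for p in prefs}

    def combine(all_prefs):
        """Step 4: keep only what the extrapolated preferences cohere on
        (a deliberately naive intersection)."""
        return set.intersection(*all_prefs) if all_prefs else set()

    def cev(world):
        people = locate_people(world)
        extrapolated = [extrapolate(current_preferences(p), p) for p in people]
        return combine(extrapolated)

    world = [
        Person("A", {"health", "status"}, {"status": "flourishing"}),
        Person("B", {"health", "money"}, {"money": "flourishing"}),
    ]
    print(cev(world))  # {'health', 'flourishing'} (set order may vary)

The intersection is only there to echo the "coherent" in Coherent Extrapolated Volition; in the real proposal each of these four steps is an open problem, not a few lines of code.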

I'm not sure what distinction you're trying to draw between values and preferences (perhaps a moral vs non-moral one?), but I don't think it's relevant to CEV as currently envisioned.

Comment author: JoshuaFox 22 February 2013 11:48:45AM 1 point

Actually, when I said "most" in "most of these are not relevant to designing a future Friendly AI," I was thinking that values are the exception: they are relevant.

Comment author: [deleted] 22 February 2013 08:52:51PM 0 points

Oh. Then yeah, OK, I think I agree.