
dxu comments on AI-created pseudo-deontology

Post author: Stuart_Armstrong 12 February 2015 09:11PM




Comment author: dxu 13 February 2015 11:12:22PM 0 points

Stuart, have you looked at AIs that don't have utility functions?

Such AIs would not satisfy the axioms of VNM-rationality, which means their preferences wouldn't be structured in an intuitively coherent way, which means... well, I'm not sure what, exactly, but since "intuitive" here means intuitive to humans, I suspect humanity wouldn't like the result.
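(For reference, the result I'm invoking is the standard von Neumann-Morgenstern theorem, sketched below; this is just the textbook statement, nothing specific to Stuart's post:)

```latex
% von Neumann-Morgenstern theorem (standard statement, sketched):
% if a preference relation \preceq over lotteries on an outcome set X
% satisfies completeness, transitivity, continuity, and independence,
% then preferences coincide with expected utility for some u:
\[
  \exists\, u : X \to \mathbb{R} \quad\text{such that}\quad
  L \preceq M \iff \mathbb{E}_{L}[u(x)] \le \mathbb{E}_{M}[u(x)] .
\]
```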

Comment author: [deleted] 13 February 2015 11:18:33PM 4 points

Since human beings are not utility maximizers, and since intuition is based on comparison to our own reference-class experience, I question your assumption that only VNM-rational agents would behave intuitively.
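(To make "not utility maximizers" concrete: the standard diagnostic is intransitive preference, which makes an agent money-pumpable. A minimal sketch in Python; all names here are invented for the example:)

```python
# A toy "money pump" against an agent with cyclic (intransitive) preferences.
# The agent prefers B over A, C over B, and A over C, and will pay a small
# fee for each "upgrade" -- so a trader can cycle it forever, draining money.
# No utility function over items can represent such preferences.

FEE = 1

class CyclicAgent:
    # prefers[x] is the item this agent will pay FEE to swap x for
    prefers = {"A": "B", "B": "C", "C": "A"}

    def __init__(self, item, money):
        self.item, self.money = item, money

    def offer(self, new_item):
        """Accept a trade whenever new_item is strictly preferred and affordable."""
        if self.prefers[self.item] == new_item and self.money >= FEE:
            self.item = new_item
            self.money -= FEE
            return True
        return False

agent = CyclicAgent(item="A", money=10)
trades = 0
while agent.offer(agent.prefers[agent.item]):
    trades += 1

# Every individual trade looked "rational" to the agent, yet it ends broke.
print(f"{trades} trades later the agent holds {agent.item!r} "
      f"with {agent.money} money left.")
```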

Comment author: dxu 15 February 2015 07:06:04PM 0 points

I'm not sure humans aren't utility maximizers; it may simply be that they don't maximize utility over worldstates. It seems plausible to me, however, that humans are utility maximizers over brainstates.
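(Spelling out the distinction: the question is which domain the utility function is defined over. A sketch of the two candidates; the notation is mine, just for illustration:)

```latex
% Two candidate domains for a human "utility function"
% (a sketch of the distinction only, not a claim about which is correct):
\[
  U_{\text{world}} : W \to \mathbb{R}
  \qquad\text{vs.}\qquad
  U_{\text{brain}} : B \to \mathbb{R}
\]
% where W is the set of external worldstates and B the set of the agent's
% own brainstates; the suggestion is that a human might coherently maximize
% some U_brain while maximizing no U_world.
```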

(Also, even if humans aren't utility maximizers, that doesn't mean they will find the behavior of other non-utility-maximizing agents intuitive. Humans often find the behavior of other humans extraordinarily unintuitive, for example, and these are identical brain designs we're talking about here. If we start considering larger regions of mindspace, there's no guarantee that humans would like a non-utility-maximizing AI.)