ChapelierdeCheshire comments on AI-created pseudo-deontology - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Such AIs would not satisfy the axioms of VNM-rationality, meaning their preferences wouldn't be structured intuitively, meaning... well, I'm not sure what, exactly, but since "intuitively" generally refers to human intuition, I think humanity probably wouldn't like that.
Since human beings are not utility maximizers, and intuition is based on comparison against our own reference-class experience, I question your assumption that only VNM-rational agents would behave intuitively.
I'm not sure humans aren't utility maximizers. They simply don't maximize utility over worldstates. I do feel, however, that it's plausible humans are utility maximizers over brainstates.
(Also, even if humans aren't utility maximizers, that doesn't mean they will find the behavior other non-utility-maximizing agents intuitive. Humans often find the behavior of other humans extraordinarily unintuitive, for example--and these are identical brain designs we're talking about, here. If we start considering larger regions in mindspace, there's no guarantee that humans would like a non-utility-maximizing AI.)
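The standard argument for why violating the VNM axioms matters is the money pump: an agent with cyclic (intransitive) preferences can be charged a fee on every trade and walked in circles forever. Here is a minimal sketch of that argument; the agent representation, option names, and `money_pump` helper are all hypothetical illustrations, not anything from the comment above.

```python
# Hypothetical illustration: an agent whose pairwise preferences are cyclic
# (A over B, B over C, C over A) violates VNM transitivity and can be
# money-pumped -- traded in a circle while paying a fee at every step.

def prefers(agent, x, y):
    """The agent is modeled as a set of (preferred, dispreferred) pairs."""
    return (x, y) in agent

cyclic_agent = {("A", "B"), ("B", "C"), ("C", "A")}

def money_pump(agent, holding, rounds, fee=1):
    """Repeatedly offer the agent an option it prefers to its current
    holding, charging `fee` per trade. With cyclic preferences the agent
    keeps trading and keeps paying, ending up where it started."""
    paid = 0
    options = {"A", "B", "C"}
    for _ in range(rounds):
        # Exactly one option is preferred to the current holding.
        better = next(x for x in options if prefers(agent, x, holding))
        holding, paid = better, paid + fee
    return holding, paid

final, total = money_pump(cyclic_agent, "A", rounds=6)
print(final, total)  # after 6 trades: holding "A" again, 6 units poorer
```

A VNM-rational agent's preferences are transitive, so after at most a couple of trades it reaches its most-preferred option and stops paying; the cyclic agent never does, which is one concrete sense in which non-VNM preferences can look pathological from the outside.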