I'm still not sold on the idea that an intelligent being would slavishly follow its utility function.
If it's really your utility function, you're not following it "slavishly" - it is just what you want to do.
For an AI, there are no questions about the meaning of life, then? Just keep on U-maximizing?
If "questions about the meaning of life" maximize utility, then yes, there are those. Can you unpack what "questions about the meaning of life" are supposed to be, and why you think they're important? ('meaning of "life"' is fairly easy, and 'meaning of life' seems like a category error).
I've realized that my sibling comment is logically rude, because I've left out some relevant detail. Most relevantly, I tend to self-describe as a virtue ethicist.
I've noticed at least 3 things called 'virtue ethics' in the wild, which are generally mashed together willy-nilly:
There are virtue ethicists who buy into only some of these, but most often folks slip between them without noticing. One fellow I know will often say that #1 being false would not damage virtue ethics, because it's really about #2 and #3 - and yet he goes on arguing in favor of virtue ethics by citing #1.
This is a great framework - very clear! Thanks!