Wei_Dai comments on Inferring Our Desires - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (45)
I think you would have a strong point if the arguments that really move us formed a coherent ethical system, but what if, when people find out what they're really moved by, it turns out not to be anything coherent, but just a semi-random set of "considerations" that happen to move a hodgepodge of neural circuits?
That certainly seems true of real humans to some extent, but the point is that even if I'm to some extent a random hodgepodge, this does not obviously create in me an impulse to consult a brain-scan readout or a table of my counterfactual behaviors and then follow those at the expense of whatever my other semi-random considerations are causing me to feel is right.
Sure, unless one of the semi-random considerations that moves you is "Crap, my EV is not coherent. Well, I don't want to lie down and wait to die, so let's just make an AI that will serve my current desires." :)
Incoherent considerations aren't all that bad. Even if someone prefers A to B, B to C, and C to A, they'll just spend a lot of time switching among them rather than waiting to die. And I'd guess that people generally prefer to change their considerations anyway, so your example of a semi-random consideration is sufficient for action, but not at all unique or uncommon.