entirelyuseless comments on Help with Bayesian priors - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
You seem to be proposing a simplistic theory of goals, much like the simplistic theory of goals that leads Eliezer to the mistaken conclusion that AI will want to take over the world.
In particular, happiness is not one unified thing that everyone is aiming at, identical for them and for me. If I admit that I do what I do in order to be happy, then a big part of that happiness would be "knowing the truth," while for them, that would be only a small part, or no part at all (although perhaps "claiming to possess the truth" would be a part of it for them -- but valuing the claim to possess the truth is really not the same as valuing the truth itself).
Additionally, taking "happiness" in its ordinary sense, I am in fact less happy on account of valuing the truth more, and there is no guarantee that this will ever be otherwise.