
entirelyuseless comments on Help with Bayesian priors - Less Wrong Discussion

4 Post author: WikiLogicOrg 14 August 2016 10:24AM




Comment author: entirelyuseless 07 September 2016 02:43:09PM 0 points

You seem to be proposing a simplistic theory of goals, much like the simplistic theory of goals that leads Eliezer to the mistaken conclusion that AI will want to take over the world.

In particular, happiness is not one unified thing that everyone is aiming at, the same for them as for me. If I admit that I do what I do in order to be happy, then a big part of that happiness would be "knowing the truth," while for them that would be only a small part, or no part at all (although perhaps "claiming to possess the truth" would be a part of it for them -- but valuing claiming to possess the truth is really not the same as valuing the truth).

Additionally, taking "happiness" in its ordinary sense, I am in fact less happy on account of valuing the truth more, and there is no guarantee that this will ever be otherwise.