
muflax comments on Why No Wireheading? - Less Wrong Discussion

16 [deleted] 18 June 2011 11:33PM


Comment author: [deleted] 19 June 2011 12:47:31PM 2 points

Are you looking for a definition?

No, I'm trying to understand the process others use to make their claims about what they value (besides direct experiences). I can't reproduce it, so it feels like they are confabulating, but I don't assume that's the most likely answer here.

For our purposes, we can get some pretty decent Bayesian evidence about what our values are simply by asking "which future scenario do I want to steer the world towards?" Is that going to give us perfect information on exactly what we value? No. But is it a pretty good start? Yes.

That seems horribly broken. There are tons of biases that make asking such questions essentially meaningless. Looking at anticipated and actual rewards and punishments can easily be done, and it fits into simple models that actually predict people's behavior. Asking complex questions leads to stuff like the Trolley problem, which is notoriously unreliable and useless for figuring out why we prefer some options to others.
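The "simple models" gestured at here can be sketched concretely. Below is a minimal Rescorla-Wagner-style value-learning rule, one standard example of a model that predicts preferences from reward history alone; the learning rate and reward values are illustrative, not taken from the discussion.

```python
# A minimal sketch of a reward-learning model of the kind the comment
# alludes to: preferences emerge from reward/punishment history, with
# no appeal to complex stated values. Parameters are illustrative.

def update_value(value, reward, learning_rate=0.1):
    """Move the estimated value of an option toward the observed reward."""
    return value + learning_rate * (reward - value)

# An agent repeatedly rewarded (+1) for option A and punished (-1) for
# option B comes to prefer A, purely from the reward signal.
value_a, value_b = 0.0, 0.0
for _ in range(50):
    value_a = update_value(value_a, reward=1.0)
    value_b = update_value(value_b, reward=-1.0)

print(value_a > value_b)  # True: learned preference tracks reward history
```

The point of the sketch is only that such a model is cheap: two numbers and one update rule suffice to predict choice behavior, in contrast to the expensive machinery that explicit complex values would require.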

It seems to me that assuming complex values requires cognitive algorithms far more expensive than anything evolution might build, and that such values don't easily fit actual revealed preferences. Their only strength seems to be that they would match some thoughts that come up while contemplating decisions (and not even non-contradictory ones). Isn't that privileging a very complex hypothesis?
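"Privileging a complex hypothesis" can be made quantitative with an Occam prior, under which a hypothesis's prior probability falls off exponentially with its description length. The description lengths below are made-up numbers purely for illustration.

```python
# Illustrative Occam prior: weight each hypothesis by 2^-(description
# length in bits). The specific bit counts here are invented for the
# example, not measurements of any real hypothesis.

def occam_prior(description_length_bits):
    return 2.0 ** (-description_length_bits)

simple_h = occam_prior(10)    # e.g. "people seek anticipated reward"
complex_h = occam_prior(40)   # e.g. a large set of irreducible terminal values

# The complex hypothesis starts 2^30 times less probable, so the
# evidence for it must supply ~30 bits of likelihood ratio to break even.
print(complex_h / simple_h)   # 2^-30
```

On this framing, the comment's objection is that introspective reports during deliberation are far too weak as evidence to pay down a penalty of that size.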