Vladimir_Nesov comments on The ideas you're not ready to post - Less Wrong

24 Post author: JulianMorrison 19 April 2009 09:23PM


Comment author: Vladimir_Nesov 23 April 2009 09:05:28PM *  0 points [-]

To simplify one of the points a little: there are simple axioms that are easy to accept (in some form). Once you grant them, the structure of decision theory follows, forcing some conclusions you intuitively disbelieve. A step further, examining the reasons the decision theory arrived at those conclusions may persuade you that you indeed should follow them, and that you were mistaken before. No hidden agenda figures into this process; since it doesn't require interacting with anyone, it may in principle be wholly personal: you against the math.
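(The "structure that follows from the axioms" can be made concrete. The sketch below is an illustrative toy, not anything from the thread: once you accept a utility function over outcomes and probabilistic beliefs, the action you "should" take is forced to be the one maximizing expected utility. The action names, outcomes, and numbers are all hypothetical.)

```python
# Minimal sketch of expected-utility maximization: grant a utility
# function and beliefs (outcome probabilities per action), and the
# "forced conclusion" is just the argmax of expected utility.

def expected_utility(action, beliefs, utility):
    """Sum utility over possible outcomes, weighted by probability."""
    return sum(p * utility(o) for o, p in beliefs[action].items())

def best_action(actions, beliefs, utility):
    """The action the axioms force you to prefer."""
    return max(actions, key=lambda a: expected_utility(a, beliefs, utility))

# Hypothetical toy lotteries: a sure thing vs. a risky gamble.
beliefs = {
    "safe":   {"small_win": 1.0},
    "gamble": {"big_win": 0.5, "loss": 0.5},
}
utility = {"small_win": 1, "big_win": 3, "loss": -2}.get

# safe: 1.0 * 1 = 1.0;  gamble: 0.5 * 3 + 0.5 * (-2) = 0.5
print(best_action(["safe", "gamble"], beliefs, utility))  # safe
```

Even if the gamble "feels" more attractive, the axioms plus these particular numbers dictate the sure thing; disagreeing means rejecting either the numbers or an axiom.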

Comment author: cousin_it 23 April 2009 09:19:34PM *  0 points [-]

Yes, an agent with a well-defined utility function "should" act to maximize it under a rigorous decision theory. Well, I'm glad I'm not such an agent. I'm very glad my life isn't governed by a simple numerical parameter like money or number of offspring. Well, there is some such parameter, but its definition involves so many of my neurons as to be unusable in practice. Joy!

Comment author: Vladimir_Nesov 23 April 2009 09:38:39PM *  0 points [-]

Well, there is some such parameter, but its definition includes so many of my neurons as to be unusable in practice. Joy!

No joy in that. We are ignorant and nearly helpless in any attempt to find this answer accurately. But we can still try: we can still infer some answers, identify the cases where our intuitive judgment systematically goes wrong, and make it better!

Comment author: ArisKatsaris 14 April 2011 03:04:20PM 1 point [-]

What if our mind has embedded in its utility function the desire not to be more accurately aware of it?

What if some people don't prefer to be more self-aware than they currently are, or their true preferences indeed lie in the direction of less self-awareness?

Comment author: JGWeissman 15 April 2011 03:24:32AM 3 points [-]

Then, for instrumental reasons, it would be right to be as self-aware as we need to be during the crunch time when we are working to produce (or support the production of) a non-sentient optimizer (or at least another sort of mind that doesn't have such self-crippling preferences), which could be aware on our behalf, and could reduce or limit our own self-awareness if that actually turns out to be the right thing to do.

Comment author: wedrifid 14 April 2011 04:57:14PM 2 points [-]

What if our mind has embedded in its utility function the desire not to be more accurately aware of it?

Careful. Some people get offended if you say things like that. Aversion to publicly admitting that they prefer not to be aware is built in as part of the same preference.

Comment author: TheOtherDave 14 April 2011 07:42:54PM 0 points [-]

OTOH, if it also comes packaged with an inability to notice public assertions that they prefer not to be aware, then you're safe.

Comment author: wedrifid 15 April 2011 03:08:16AM 0 points [-]

If only... :P

Comment author: Vladimir_Nesov 14 April 2011 03:36:08PM 1 point [-]

Then how would you ever know? Rational ignorance is really hard.