asr comments on On not diversifying charity - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (73)
You are using "preference" to mean something other than what I thought you were.
I'm not convinced that the CEV definition of preference is useful. No actual human ever has infinite time or information; we are always making decisions while we are limited computationally and informationally. You can't just define away those limits. And I'm not at all convinced that our preferences would converge even given infinite time. That's an assumption, not a theorem.
When buying pasta sauce, I have multiple incommensurable values: money, health, and taste. And in general, when you have multiple criteria, there's no non-paradoxical way to aggregate the rankings (this is basically Arrow's theorem). I suspect that's the cause of my lack of a preference ordering.
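The paradox is easy to exhibit concretely. A minimal sketch, with invented rankings of three hypothetical sauces A, B, C under the three criteria above, showing that pairwise majority aggregation can cycle (a Condorcet cycle, the phenomenon behind Arrow-style impossibility results):

```python
# Hypothetical rankings (best first) of three pasta sauces by three
# criteria. The rankings are made up purely for illustration.
rankings = {
    "money":  ["A", "B", "C"],
    "health": ["B", "C", "A"],
    "taste":  ["C", "A", "B"],
}

def prefers(ranking, x, y):
    """True if x is ranked above y in this criterion's ordering."""
    return ranking.index(x) < ranking.index(y)

def majority_prefers(x, y):
    """True if a majority of criteria rank x above y."""
    votes = sum(prefers(r, x, y) for r in rankings.values())
    return votes > len(rankings) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"{x} beats {y}: {majority_prefers(x, y)}")
# Each contest is won 2-1, so the aggregate preference cycles
# A > B > C > A: no consistent overall ranking exists.
```

With these rankings every pairwise contest goes 2-1, so no sauce is best overall even though each individual criterion is a perfectly consistent ordering.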
Of course. But rationality means your decisions should be as close as possible to the decisions you would make if you had infinite time and information.
Money is not a terminal value for most people. I suspect you want money because of the things it can buy you, not as a value in itself. I think health is also instrumental: we value health because illness is unpleasant, might lead to death, and generally interferes with taking actions to optimize our values. The unpleasant sensations of illness might well be commensurable with the pleasant sensations of taste. For example, you would probably pass up a gourmet meal if eating it meant getting cancer.
However, you cannot know what decisions you would make if you had infinite time and information. You can make guesses based on your ideas about convergence, but that's about it.
A Bayesian never "knows" anything. She can only compute probabilities and expectation values.
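To make "computing probabilities and expectation values" concrete, here is a minimal sketch with made-up numbers: a Bayesian update about whether a coin is biased, after which the agent still holds only a probability-weighted estimate, never certainty.

```python
# Toy Bayesian update (numbers invented for illustration): is the coin
# biased (P(heads) = 0.8) or fair (P(heads) = 0.5)? We observe one head.
prior = {"biased": 0.5, "fair": 0.5}
likelihood_heads = {"biased": 0.8, "fair": 0.5}

# Bayes' rule: posterior ∝ prior × likelihood, normalized by the evidence.
evidence = sum(prior[h] * likelihood_heads[h] for h in prior)
posterior = {h: prior[h] * likelihood_heads[h] / evidence for h in prior}

# Expected probability of heads on the next flip: a weighted average
# over hypotheses, not a definite answer.
expected_heads = sum(posterior[h] * likelihood_heads[h] for h in prior)
print(posterior)       # biased ≈ 0.615, fair ≈ 0.385
print(expected_heads)  # ≈ 0.685
```

Even after the update, the agent "knows" nothing categorical about the coin; it only has a sharper distribution and a revised expectation.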
Can she compute probabilities and expectation values with respect to decisions she would make if she had infinite time and information?
I think it should be possible to compute probabilities and expectation values of absolutely anything. However, to put this on a sound mathematical basis, we need a theory of logical uncertainty.
On the basis of what do you think so? And what entity will be doing the computing?
I think so because, conceptually, a Bayesian expectation value is your "best effort" to estimate something. Since you can always make your "best effort," you can always compute the expectation value. Of course, for this to fully make sense we must take limits on computing resources into account. So we need a theory of probability under limited computing resources, i.e., a theory of logical uncertainty.
Not quite. Conceptually, a Bayesian expectation is your attempt to rationally quantify your beliefs, which may or may not involve best effort. That requires those beliefs to exist. I don't see why it isn't possible to have no beliefs with regard to some topic.
That's not very meaningful. You can always output some number, but so what? If you have no information you have no information and your number is going to be bogus.
If you don't believe that the process of thought asymptotically converges to some point called "truth" (at least approximately), what does it mean to have a correct answer to any question?
Meta-remark: Whoever is downvoting all of my comments in this thread, do you really think I'm not arguing in good faith? Or are you downvoting just because you disagree? If it's the latter, do you think that's good practice, or have you just not given it thought?