CronoDAS comments on Jews and Nazis: a version of dust specks vs torture - Less Wrong

16 Post author: shminux 07 September 2012 08:15PM


Comment author: Wrongnesslessness 08 September 2012 04:35:18PM *  0 points [-]

I'm a bit confused by this torture vs. dust specks problem. Is there an additive function for qualia, so that they can be added up and compared? It would be interesting to look at the definition of such a function.

Edit: removed a bad example of qualia comparison.

Comment author: Incorrect 08 September 2012 04:38:17PM 1 point [-]

They aren't adding qualia, they are adding the utility they associate with qualia.

Comment author: Wrongnesslessness 08 September 2012 05:13:01PM 0 points [-]

It is not a trivial task to define a utility function that could compare such incomparable qualia.

Wikipedia:

However, it is possible for preferences not to be representable by a utility function. An example is lexicographic preferences which are not continuous and cannot be represented by a continuous utility function.

Has it been shown that this is not the case for dust specks and torture?

Comment author: benelliott 08 September 2012 07:05:31PM 3 points [-]

In the real world, if you had lexicographic preferences you effectively wouldn't care about the bottom level at all. You would always reject a chance to optimise for it, instead chasing the tiniest epsilon chance of affecting the top level. Lexicographic preferences are sometimes useful in abstract mathematical contexts where they can clean up technicalities, but would be meaningless in the fuzzy, messy actual world where there's always a chance of affecting something.
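The point above can be sketched in code. Python compares tuples lexicographically, so a `(top, bottom)` pair models these preferences directly: the bottom tier is consulted only to break exact ties at the top, which is why any nonzero improvement in the top tier swamps an arbitrarily large bottom-tier gain. The numbers below are purely illustrative:

```python
def prefers(a, b):
    """True if outcome a is strictly preferred to outcome b.

    Outcomes are (top_tier, bottom_tier) pairs; tuple comparison in
    Python is lexicographic, so the bottom tier only matters when the
    top tiers are exactly equal.
    """
    return a > b

# A tiny top-tier gain beats an astronomically large bottom-tier gain:
epsilon = 1e-12
assert prefers((epsilon, 0), (0, 10**100))

# The bottom tier breaks exact ties at the top:
assert prefers((1, 5), (1, 4))
```

Under uncertainty the same logic applies to probabilities: an option with any positive chance of moving the top tier beats any sure bottom-tier gain, which is exactly the "wouldn't care about the bottom level at all" behaviour described above.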

Comment author: Wrongnesslessness 09 September 2012 05:24:21AM 0 points [-]

I've always thought the problem with the real world is that we cannot really optimize for anything in it, precisely because it is so messy and entangled.

I seem to have lexicographic preferences for quite a lot of things that cannot be sold, bought, or exchanged. For example, I would always prefer having one true friend to any number of moderately intelligent ardent followers. And I would always prefer an FAI to any number of human-level friends. It is not a difference in some abstract "quantity of happiness" that produces such preferences; those are qualitatively different life experiences.

Since I do not really know how to optimize for any of this, I'm not willing to reject human-level friends and even moderately intelligent ardent followers that come my way. But if I'm given a choice, it's quite clear what my choice will be.

Comment author: benelliott 09 September 2012 03:44:48PM *  0 points [-]

I don't want to be rude, but your first example in particular looks like a case where it's beneficial to signal lexicographic preferences.

Since I do not really know how to optimize for any of this

What do you mean, you don't know how to optimise for this? If you want an FAI, then donating to SIAI almost certainly does more good than nothing (even if they aren't as effective as they could be, they almost certainly don't have zero effectiveness; if you think they have negative effectiveness, then you should be persuading others not to donate). Any time spent acquiring or spending time with true friends would be better spent earning money to donate (or encouraging others not to) if your preferences are truly lexicographic. This is what I mean when I say that in the real world, lexicographic preferences just cash out as not caring about the bottom level at all.

You've also confused the issue by talking about personal preferences, which tend to be non-linear, rather than interpersonal ones. It may well be that the value of both ardent followers and true friends suffers diminishing returns as you get more of them, and probably tends towards an asymptote. The real question is not "do I prefer an FAI to any number of true friends" but "do I prefer a single true friend to any chance of an FAI, however small", and for me at least the answer seems to be no.
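The "diminishing returns tending towards an asymptote" idea above can be made concrete with a bounded utility curve. The saturating form `1 - e^(-n)` below is an assumed illustration, not anything from the comment itself:

```python
import math

def friend_utility(n, scale=1.0):
    """Illustrative bounded utility for n true friends.

    The value approaches 1 as n grows, so no number of friends can
    ever exceed the asymptote.
    """
    return 1 - math.exp(-scale * n)

# Each additional friend adds less than the previous one:
gains = [friend_utility(n + 1) - friend_utility(n) for n in range(5)]
assert all(g1 > g2 for g1, g2 in zip(gains, gains[1:]))

# The total stays strictly below the asymptote:
assert friend_utility(10) < 1
```

With a bounded curve like this, an outcome valued above the asymptote (say, an FAI) is preferred to any number of friends without the preference needing to be lexicographic at all, which is why the interpersonal framing matters.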

Comment author: TheOtherDave 08 September 2012 05:48:55PM 1 point [-]

I'm not sure how one could show such a thing in a way that can plausibly be applied to the Vast scale differences posited in the DSvT thought experiment.

When I try to come up with real-world examples of lexicographic preferences, it's pretty clear to me that I'm rounding... that is, X is so much more important than Y that I can in effect neglect Y in any decision that involves a difference in X, no matter how much Y there is relative to X, for any values of X and Y worth considering.

But if someone seriously invites me to consider ludicrous values of Y (e.g., 3^^^3 dust specks), that strategy is no longer useful.

Comment author: Wrongnesslessness 09 September 2012 05:36:45AM 0 points [-]

I'm quite sure I'm not rounding when I prefer hearing a Wagner opera to hearing any number of folk dance tunes, and when I prefer reading a Vernor Vinge novel to hearing any number of Wagner operas. See also this comment for another example.

It seems lexicographic preferences arise when one has a choice between qualitatively different experiences. In such cases, any difference in quantity, however vast, is just irrelevant. An experience of long unbearable torture cannot be quantified in terms of minor discomforts.

Comment author: TheOtherDave 09 September 2012 05:44:06AM 0 points [-]

It seems our introspective accounts of our mental processes are qualitatively different, then.

I'm willing to take your word for it that your experience of long unbearable torture cannot be "quantified" in terms of minor discomforts. If you wish to argue that mine can't either, I'm willing to listen.