jwdink comments on The Trolley Problem in popular culture: Torchwood Series 3 - Less Wrong

Post author: botogol 27 July 2009 10:46PM

Comment author: jwdink 29 July 2009 09:19:01PM 0 points

Instrumental rationality: achieving your values.  Not necessarily "your values" in the sense of being selfish values or unshared values: "your values" means anything you care about.  The art of choosing actions that steer the future toward outcomes ranked higher in your preferences.  On LW we sometimes refer to this as "winning".

Couldn't these people care about not sacrificing autonomy, and therefore this would be a value that they're successfully fulfilling?

Comment author: Vladimir_Nesov 29 July 2009 09:27:48PM 0 points

Yes, they could care about either outcome. The question is whether they do: whether their true, hidden preferences rank one outcome above the other.

Comment author: jwdink 29 July 2009 10:25:48PM 0 points

What would be an example of a hidden preference? The post to which you linked didn't explicitly mention that concept at all.

Comment author: Vladimir_Nesov 29 July 2009 10:37:27PM 0 points

All human preferences, in their exact form, are hidden. The complexity of human value is too great to comprehend it all explicitly with a merely human mind.

Comment author: jwdink 30 July 2009 07:00:07PM 0 points

All human preferences, in their exact form, are hidden. The complexity of human value is too great to comprehend it all explicitly with a merely human mind.

Okay... so again, I'll ask: why is it irrational to NOT sacrifice the children? How does it go against hidden preferences (which, perhaps, it would be prudent to define)?

Comment author: orthonormal 30 July 2009 07:37:58PM 1 point

I understand your frustration, since we don't seem to be saying much to support our claims here. We've discussed relevant issues of metaethics quite heavily on Less Wrong, but we should be willing to enter the debate again as new readers arrive and raise their points.

However, there's a lot of material that's already been said elsewhere, so I hope you'll pardon me for pointing you towards a few early posts of interest right now instead of trying to summarize it in one go.

Torture vs. Dust Specks kicked off the arguing; Eliezer began arguing for his own position in Circular Altruism and The "Intuitions" Behind "Utilitarianism". Searching LW for keywords like "specks" or "utilitarian" should bring up more recent posts as well, but these three sum up more or less what I'd say in response to your question.

(There's a whole metaethics sequence later on (see the whole list of Eliezer's posts from Overcoming Bias), but that's less germane to your immediate question.)

Comment author: jwdink 30 July 2009 09:26:32PM 0 points

Oh, it's no problem if you point me elsewhere; I should've specified that that would be fine. I just wanted some definition. The only link given, I believe, was one defining rationality. Thanks for the links, I'll check them out.

Comment author: pjeby 30 July 2009 05:45:35AM 0 points

All human preferences, in their exact form, are hidden. The complexity of human value is too great to comprehend it all explicitly with a merely human mind.

It's especially hard if you use models based on utility maximization rather than on prediction-error minimization, or if you assume that human values are coherent even within a given individual, let alone across humanity as a whole.

That being said, it is certainly possible to map a subset of one's preferences as they pertain to some specific subject, and to do a fair amount of pruning and tuning. One's preferences are not necessarily opaque to reflection; they're mostly just nonobvious.