hrishimittal comments on No One Knows Stuff - Less Wrong

Post author: talisman 12 May 2009 05:11AM | 7 points


Comment author: hrishimittal 12 May 2009 09:16:06AM 0 points

I don't really know the formal definition or theory of expected utility, but it seems to underpin almost everything that is said here on LW or on OB.

Can anyone please point me to a good reference or write a wiki entry?

Are the wikipedia references recommended?

Comment author: conchis 12 May 2009 10:26:12AM * 2 points

The wikipedia reference is a bit patchy. This Introduction to Choice under Risk and Uncertainty is pretty good if you have a bit more time and can handle the technical parts.

Comment author: hrishimittal 12 May 2009 11:16:16AM 0 points

Thanks conchis.

Comment author: timtyler 12 May 2009 06:33:47PM 0 points

Perhaps check my references here:

http://timtyler.org/expected_utility_maximisers/

Comment author: thomblake 12 May 2009 07:00:14PM 0 points

Thanks! I hadn't heard that definition of utilitarianism before.

Comment author: timtyler 12 May 2009 10:01:05PM 0 points

As I recall, I made this up to suit my own ends :-(

Wikipedia quibbles with me significantly, stressing the idea that utilitarianism is a form of consequentialism:

"Utilitarianism is the idea that the moral worth of an action is determined solely by its contribution to overall perceivable utility: that is, its contribution to happiness or pleasure as summed among an ill-defined group of people. It is thus a form of consequentialism, meaning that the moral worth of an action is determined by its outcome."

I don't really want "utilitarianism" to refer to a form of consequentialism - thus my crude attempt at hijacking the term :-|

Comment author: thomblake 13 May 2009 02:20:11PM 0 points

I hadn't even considered the possibility that your definition might lead to a 'utilitarianism' that is not consequentialist. In some circles, the two terms are used interchangeably. Sounds akin to 'rule utilitarianism', but more interesting - the right action is one that maximizes expected utility, regardless of its actual consequences. Does that sound like a good enough characterization?

Comment author: timtyler 13 May 2009 03:51:34PM * 0 points

I would still be prepared to call an agent "utilitarian" if it operated by maximising expected utility - even if its expectations turned out to be completely wrong, and its actions were far from those that would have actually maximised utility.

Humans are often a bit like this. They "expect" that hoarding calories is a good idea - and so that is what they do. Actually, this often turns out not to be so smart. However, this flaw doesn't make humans less utilitarian in my book - rather, they have some bad priors - and they are wired-in ones that are tricky to update.
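The distinction above - an agent that maximises *expected* utility under possibly mistaken beliefs, versus one judged by actual outcomes - can be sketched in a few lines. This is only an illustrative toy: the action names, probabilities, and utility numbers are all made up for the calorie-hoarding example, not anything from decision theory proper.

```python
# Toy sketch: an agent can be an expected-utility maximiser even when its
# beliefs are wrong. All numbers here are invented for illustration.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def choose(actions):
    """Pick the action whose *believed* expected utility is highest."""
    return max(actions, key=lambda name: expected_utility(actions[name]))

# The agent's (mistaken, "wired-in") beliefs: hoarding rarely backfires.
believed = {
    "hoard_calories": [(0.9, 10), (0.1, -5)],   # believed EU = 8.5
    "eat_moderately": [(1.0, 6)],               # believed EU = 6.0
}

# Reality: in a modern environment, hoarding usually backfires.
actual = {
    "hoard_calories": [(0.2, 10), (0.8, -5)],   # actual EU = -2.0
    "eat_moderately": [(1.0, 6)],               # actual EU = 6.0
}

choice = choose(believed)
print(choice)                                        # hoard_calories
print(round(expected_utility(actual[choice]), 6))    # -2.0
```

On timtyler's usage, this agent still counts as utilitarian: it genuinely maximises expected utility relative to its own priors, even though its choice is a bad one relative to the actual odds.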