gwern comments on Some potential dangers of rationality training - Less Wrong

Post author: lukeprog 21 January 2012 04:50AM


Comment author: gwern 21 January 2012 05:44:54AM *  2 points [-]

Perhaps the sunk cost fallacy is useful because without it you're prone to switch projects as soon as a higher-value project comes along, leaving an ever-growing heap of abandoned projects behind you.

There's actually some literature on justifying the sunk cost fallacy, pointing to the learning forgone by switching. (I should finish my essay on the topic; one of my examples was going to be 'imagine a simple AI which avoids sunk cost fallacy by constantly switching tasks...')

EDIT: you can see my essay at http://www.gwern.net/Sunk%20cost
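That thought experiment can be sketched as a toy simulation (a hypothetical model of my own, not gwern's; the payoffs and the rate at which better tasks arrive are made up):

```python
# Toy model (all numbers hypothetical): an agent with no sunk-cost
# bias always works on whichever task has the highest per-hour
# payoff over its *remaining* hours. If slightly better tasks keep
# arriving, it switches every hour and never finishes anything.

def per_hour(task):
    payoff, hours_left = task
    return payoff / hours_left

tasks = [[40.0, 5.0]]          # [total payoff, hours of work remaining]
completed = 0
for hour in range(20):
    tasks.append([50.0 * 1.5 ** hour, 5.0])  # a better task appears
    current = max(tasks, key=per_hour)       # ignore sunk costs entirely
    current[1] -= 1.0                        # put in one hour of work
    if current[1] <= 0:
        completed += 1
        tasks.remove(current)

print(completed)  # 0: twenty hours of work, nothing finished
```

In this setup each new arrival's per-hour rate always edges out the partially-done task's remaining rate, so the agent abandons every task after one hour.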

Comment author: Solvent 21 January 2012 06:18:09AM 1 point [-]

'imagine a simple AI which avoids sunk cost fallacy by constantly switching tasks...')

Why would an AI have the sunk cost fallacy at all? Aren't you anthropomorphizing?

Comment author: Grognor 21 January 2012 06:38:19AM 0 points [-]

No, his example points out what an AI that specifically does not have the sunk cost fallacy is like.

Comment author: Solvent 21 January 2012 06:49:24AM 3 points [-]

The thing is, an AI wouldn't need to feel a sunk cost effect. It would act optimally simply by maximising expected utility.

For example, say that I decide to work on Task A, which will take me five hours and will earn me $200. After two hours of work, I discover Task B, which will award me $300 after five hours. At this point, I can behave like a human, feel bored and annoyed, and perhaps let the sunk cost effect make me continue. Or I can calculate expected return: I'll get $200 after 3 more hours of work on Task A, which is about $67 per hour, whereas I'll get $300 after 5 hours on Task B, which is $60 per hour. So the rational thing to do is to avoid switching.

The sunk cost effect reflects the fact that after putting work into something, the effective wage for the remaining work increases. An AI wouldn't need that heuristic to act optimally.
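The arithmetic above can be checked directly (figures are from the comment; the variable names are mine):

```python
# Task A: $200 total, 2 of 5 hours already done -> 3 hours remain.
# Task B: $300 for 5 fresh hours.
remaining_rate_a = 200 / 3   # about $66.67 per further hour on Task A
rate_b = 300 / 5             # $60.00 per hour on Task B

# The two sunk hours are irrelevant to the decision; only the
# marginal per-hour rates matter, and they favour finishing Task A.
assert remaining_rate_a > rate_b
```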

Comment author: gwern 21 January 2012 03:04:39PM 2 points [-]

One of my points is that 'simply maximize expected utility' buries a great deal of hidden complexity and intelligence; it is true that sunk cost is a fallacy in many simple, fully-specified models, and that any simple AI can be rescued just by saying 'give it a longer horizon! more computing power! more data!', but do these simple models correspond to the real world?

(See also the question of whether exponential discounting rather than hyperbolic discounting is appropriate, if returns follow various random walks rather than remain constant in each time period.)
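The parenthetical contrast can be made concrete by comparing the two discount curves (a minimal sketch; the discount rate `k` is an arbitrary illustrative choice of mine):

```python
import math

k = 0.5  # discount rate (illustrative)

def exponential(t):
    return math.exp(-k * t)

def hyperbolic(t):
    return 1 / (1 + k * t)

# Exponential discounting is time-consistent: the discount ratio
# across a fixed one-period delay is the same no matter when the
# delay occurs.
ratio_now   = exponential(1) / exponential(0)
ratio_later = exponential(11) / exponential(10)

# Hyperbolic discounting is not: a near-term delay is discounted
# more steeply than the same delay far in the future, which is what
# produces preference reversals.
h_now   = hyperbolic(1) / hyperbolic(0)
h_later = hyperbolic(11) / hyperbolic(10)
```

Here `ratio_now` equals `ratio_later`, while `h_now` is smaller than `h_later`.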

Comment author: [deleted] 21 January 2012 07:33:49AM 0 points [-]

You neglected the part where the AI may stand to learn something from the task, which may have a large expected value relative to the tasks themselves.

Comment author: Solvent 22 January 2012 02:44:59AM 1 point [-]

Yeah, but that comes under expected utility.

Comment author: [deleted] 22 January 2012 07:09:29PM 0 points [-]

What else are you optimising besides utility? Doing the calculations with the money can tell you the expected money value of the tasks, but unless your utility function is U=$$$, you need to take other things into account.

Comment author: dbaupp 25 January 2012 12:38:56PM 0 points [-]

Off-topic, but...

I like how you have the sections on the side of your pages. Looks good (and works reasonably well)!

Comment author: gwern 26 January 2012 03:16:29PM 0 points [-]

Thanks. It was a distressing amount of work, but I hoped it'd make up for it by keeping readers oriented.

Comment author: dbaupp 30 January 2012 03:58:28AM 0 points [-]

Yep, it seems to. :)

(Bug report: the sausages overlap the comments (e.g. here), maybe just a margin-right declaration in the CSS for that div?)

Comment author: gwern 30 January 2012 04:17:50PM 0 points [-]

I don't see it. When I halve my screen, the max-width declaration kicks in and the sausages aren't visible at all.

Comment author: dbaupp 31 January 2012 12:05:56PM 0 points [-]

Hmm, peculiar...

Here is what I see: 1 2 (the last word of the comment is cut off).

Comment author: gwern 31 January 2012 04:38:05PM 0 points [-]

First image link is broken; I see what you mean in the second. Could it be your browser doesn't accept CSS3 at all? Do the sausages ever disappear as you keep narrowing the window width?

Comment author: dbaupp 31 January 2012 10:04:20PM 0 points [-]

(Not sure what happened to that link, sorry. It didn't show anything particularly different to the other one though)

Those screenshots are Firefox nightly (so bleeding edge CSS3 support) but chrome stable shows a similar thing (both on Linux).

Yes, the sausages do disappear if the window is thin enough.