# Zachary_Kurtz comments on Too busy to think about life - Less Wrong

67 22 April 2010 03:14PM


Comment author: 23 April 2010 07:37:54PM 2 points [-]

Applying optimal foraging theory to rationality is something we've been discussing at the NYC-LW meetup group for a few months now. I think this is related to this post.

http://en.wikipedia.org/wiki/Optimal_foraging_theory

Comment author: 24 April 2010 02:59:41PM *  2 points [-]

Sounds promising. What kind of rationality did you discuss in relation to OFT -- epistemic or instrumental? Or, in other words, what quantity did you substitute for the energy to be maximized -- improvement of one's map of reality or progress towards one's goals? Did you attempt to quantify these?

Comment author: 26 April 2010 04:38:48PM 4 points [-]

Both really. How much time should we dedicate to making our map fit the territory before we start sacrificing optimality? Spend too long trying to improve epistemic rationality and you begin to sacrifice your ability to get to work on actual goal seeking.

On the other end, if you don't spend long enough to improve your map, you may be inefficiently or ineffectively trying to reach your goals.

We're still thinking of ways to be able to quantify these. Largely it depends on the specific goal and map/territory as well as the person.
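One toy way to sketch that tradeoff, purely illustrative (the exponential learning curve and the 10-unit time budget are assumptions I'm making up, not anything settled in the thread): model map quality as improving with diminishing returns in time spent learning, and total payoff as quality times the time left over for actually pursuing the goal.

```python
import math

def payoff(t_learn, total=10.0):
    """Toy model: map quality rises with diminishing returns in learning
    time; payoff is that quality times the time remaining to act on it."""
    quality = 1.0 - math.exp(-t_learn)   # assumed learning curve
    return quality * (total - t_learn)   # remaining time spent executing

# Grid-search the optimal split between learning and doing:
best = max((t / 100.0 for t in range(0, 1001)), key=payoff)
```

Under these made-up assumptions the optimum lands somewhere around 2 of the 10 units on map-improvement, which at least illustrates that "some, but not most" can fall out of a model rather than a gut feeling. The real difficulty, as noted above, is choosing curves that fit the specific goal and person.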

Anybody else have some ideas?

Comment author: 07 August 2011 08:15:56AM 1 point [-]

In AI, this is known as the exploration/exploitation problem. You could try Googling "Multi-armed bandit" for an extremely theoretical view.
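A minimal sketch of that framing, assuming a Bernoulli bandit and the standard epsilon-greedy strategy (the arm payoff probabilities and epsilon value here are invented illustrative numbers):

```python
import random

def epsilon_greedy(arm_means, epsilon=0.1, pulls=10000, seed=0):
    """Epsilon-greedy on a Bernoulli bandit: with probability epsilon pull
    a random arm (explore), otherwise pull the empirically best arm
    (exploit). Returns the average reward per pull."""
    rng = random.Random(seed)
    n = len(arm_means)
    counts = [0] * n
    totals = [0.0] * n
    reward = 0.0
    for _ in range(pulls):
        if rng.random() < epsilon:
            arm = rng.randrange(n)  # explore
        else:
            # Untried arms get +inf so each is sampled at least once.
            est = [totals[i] / counts[i] if counts[i] else float("inf")
                   for i in range(n)]
            arm = max(range(n), key=lambda i: est[i])  # exploit
        r = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        totals[arm] += r
        reward += r
    return reward / pulls

avg = epsilon_greedy([0.2, 0.5, 0.8])
```

The mapping back to the thread: arms are candidate activities, pulls are units of time, and epsilon is how much of your time you reserve for improving the map rather than exploiting it.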

My biggest recommendation is to do a breadth-first search, using Fermi calculations for value of information. If people would be interested, I could maybe write a guide on how to do this more concretely?
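A back-of-the-envelope value-of-information check along those lines might look like this (the probability and dollar figures are invented placeholders, not anything from the comment):

```python
def value_of_information(p_wrong, cost_if_wrong, cost_of_research):
    """Fermi-style check: is the expected loss avoided by doing more
    research larger than what the research itself costs?"""
    expected_loss_avoided = p_wrong * cost_if_wrong
    return expected_loss_avoided - cost_of_research

# E.g. a 30% chance the unresearched option fails ($200 loss), versus
# $20 worth of time spent researching:
net = value_of_information(0.30, 200, 20)  # positive => worth researching
```

If `net` is positive, research pays in expectation; if negative, act on the current map. The point of keeping it this crude is that an order-of-magnitude answer is usually enough to decide.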

Comment author: 28 April 2010 04:26:00AM 1 point [-]

First, understand the domain of the problem so you can identify potential downsides. Is this area Black Swan prone? Does this resemble Newcomb's problem at all? What do I think the shape of the risk is here?

For most things people need to do in daily life, we might just weigh the cost of further optimization against the cost of remaining ignorant and being wrong as a result of that ignorance. It can be good to be aware of the biases that Prospect Theory talks about: am I passing up a reasonable chance to win big because I'm so afraid of losing pennies?
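That last question can be made concrete with Prospect Theory's loss-aversion multiplier (the ~2.25 weighting is Kahneman and Tversky's empirical estimate; the bet itself is a made-up example):

```python
def evaluate_bet(p_win, gain, loss, loss_aversion=1.0):
    """Compare a bet's actual expected value with how it 'feels' to an
    agent who weighs losses more heavily than equal-sized gains."""
    ev = p_win * gain - (1 - p_win) * loss
    felt = p_win * gain - (1 - p_win) * loss * loss_aversion
    return ev, felt

# A coin flip: win $150 or lose $100. Positive expected value, but it
# feels negative to an agent weighing losses ~2.25x:
ev, felt = evaluate_bet(0.5, 150, 100, loss_aversion=2.25)
```

When `ev` is positive but `felt` is negative, that gap is exactly the "afraid of losing pennies" failure mode: a bias-aware check is to ask which of the two numbers you're actually acting on.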