LongInTheTooth comments on "Outside View!" as Conversation-Halter - Less Wrong

49 Post author: Eliezer_Yudkowsky 24 February 2010 05:53AM

Comment author: LongInTheTooth 24 February 2010 05:27:51PM 0 points

Once again, Bayesian reasoning comes to the rescue. The assertion to stop updating based on new data (ignore the inside view!) is just plain wrong.

However a reminder to be careful and objective about the probability one might assign to a new bit of data (Inside view data is not privileged over outside view data! And it might be really bad!) is helpful.

Comment author: Eliezer_Yudkowsky 24 February 2010 06:36:54PM 12 points

The assertion to stop updating based on new data (ignore the inside view!) is just plain wrong.

I'd like to be able to say that, but there actually is research showing how human beings get more optimistic about their Christmas shopping estimates as they try to visualize the details of when, where, and how.

Your statement is certainly true of an ideal rational agent, but it may not be borne out in human practice.

Comment author: Cyan 24 February 2010 06:59:07PM * 1 point

...updating based on new data (ignore the inside view!)...

...human beings get more optimistic... as they try to visualize the details of when, where, and how.

Are updating based on new data and updating based on introspection equivalent? If not, then LongInTheTooth equivocated by calling ignoring the inside view a failure to update based on new data. But maybe they are equivalent under a non-logical-omniscience view of updating, and it's necessary to factor in meta-information about the quality and reliability of the introspection.

Comment author: LongInTheTooth 24 February 2010 10:57:04PM 1 point

"But maybe they are equivalent under a non-logical-omniscience view of updating, and it's necessary to factor in meta-information about the quality and reliability of the introspection."

Yes, that is what I was thinking in a wishy-washy intuitive way, rather than an explicit and clearly stated way, as you have helpfully provided.

The act of visualizing the future and planning how long a task will take, based on guesses about how long the subtasks will take, is what I would call generating new data, which one might then use to update the probability of finishing the task by a specific date. (FogBugz Evidence Based Scheduling does exactly this, although with Monte Carlo simulation rather than Bayesian math.)
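A minimal sketch of the Monte Carlo approach described here: scale each subtask estimate by a randomly sampled historical (actual / estimated) ratio and sum, yielding a distribution over completion times. The subtask estimates and historical ratios below are invented for illustration; they are not from FogBugz or the thread.

```python
import random

# Hypothetical subtask estimates in days (illustrative numbers only).
estimates = [2.0, 5.0, 1.0, 3.0]

# Hypothetical historical ratios of (actual time / estimated time) for this
# estimator's past tasks -- the track record an EBS-style scheduler collects.
history = [1.1, 1.6, 0.9, 2.0, 1.3]

def simulate_totals(estimates, history, trials=10_000):
    """For each trial, multiply every estimate by a randomly chosen
    historical ratio and sum, giving one simulated total duration."""
    totals = [
        sum(e * random.choice(history) for e in estimates)
        for _ in range(trials)
    ]
    return sorted(totals)

totals = simulate_totals(estimates, history)

# The fraction of simulated futures finishing within some deadline is the
# estimated probability of shipping by that date, e.g. within 15 days:
p_15 = sum(t <= 15 for t in totals) / len(totals)
```

The key move is that the raw inside-view estimates are never trusted directly; they are corrected by the outside-view track record before any probability is read off.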

But research shows that when doing this exercise for homework assignments and Christmas shopping (and, incidentally, software projects), the data is terrible. Good point! Don't lend much weight to this data for those projects.

I see Eliezer saying that sometimes the internally generated data isn't bad after all.

So, applying a Bayesian perspective, the answer is: Be aware of your biases for internally generated data (inside view), and update accordingly.
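One crude way to "update accordingly" is a tempered update: raise the likelihood ratio from the inside-view evidence to a reliability exponent between 0 and 1, so known-biased introspection moves the posterior less than fully trusted data would. This is a common heuristic for discounting unreliable evidence, offered here as an assumption-laden sketch rather than anything proposed in the thread.

```python
def update_odds(prior_odds, likelihood_ratio, reliability):
    """Tempered Bayesian update on odds.

    reliability = 1.0 applies the evidence at full strength;
    reliability = 0.0 ignores it entirely (maximal distrust of the
    inside view); intermediate values partially discount it.
    """
    return prior_odds * likelihood_ratio ** reliability

# Example: prior odds 1:1 of finishing on time. An optimistic inside-view
# plan suggests odds of 4:1, but we only half-trust the introspection,
# so the posterior odds land at 4 ** 0.5 = 2:1 rather than 4:1.
posterior = update_odds(1.0, 4.0, 0.5)
```

Setting the reliability exponent is exactly the hard part LongInTheTooth gestures at: for Christmas shopping and software schedules, the research suggests it should be small.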

And generalizing from my own experience, I would say, "Good luck with that!"