PhilGoetz comments on Anticipating critical transitions - Less Wrong

17 Post author: PhilGoetz 09 June 2013 04:28PM




Comment author: PhilGoetz 10 June 2013 07:21:11PM  1 point

Both parts are about the hazards of trusting results from computer simulations. Putting them together shows the connection between divergent series and tail risk: when a series inside the computation changes from convergent to divergent, tail risk appears, and the simulation's trustworthiness degrades suddenly, in a way that may be difficult to notice.
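A minimal sketch of this failure mode, assuming a Pareto-distributed quantity inside the simulation (the distribution and parameters are illustrative, not from the post): when the tail index drops below 1 the mean becomes infinite, and a sample average is dominated by its single largest draw, so the simulation's output never settles.

```python
import random

def pareto_draw(alpha, rng):
    # Inverse-CDF sample from a Pareto(alpha) distribution with minimum 1.
    return rng.random() ** (-1.0 / alpha)

def max_share_of_sum(alpha, n, seed=0):
    # Fraction of the total contributed by the single largest draw.
    rng = random.Random(seed)
    xs = [pareto_draw(alpha, rng) for _ in range(n)]
    return max(xs) / sum(xs)

# Light tail (alpha = 3): the mean exists, and no single draw dominates.
light = max_share_of_sum(3.0, 100_000)

# Heavy tail (alpha = 0.8): the mean is infinite; a single outlier carries a
# large fraction of the sum, so the sample average never converges.
heavy = max_share_of_sum(0.8, 100_000)
```

The transition between these two regimes is exactly the convergent-to-divergent change described above: nothing in the code visibly breaks, but the averages the simulation reports quietly stop meaning anything.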

If you are a rational agent, you work with averages, because you want to maximize expected utility. In the kinds of simulations I mentioned, it's especially important to look at the average rather than the median, because most of the harm in the real world (from economic busts, wars, climate change, earthquakes, superintelligences) comes from the outliers. In some cases, the median behavior is negligible. The average is a well-defined way of getting at that, while "probability of a catastrophically large change" is not well-defined.
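A toy illustration of why the median misses outlier-driven harm (the probability and cost figures are hypothetical, chosen only to make the arithmetic obvious):

```python
# Hypothetical annual loss distribution: nothing happens in most years,
# but a rare catastrophe (probability 1/1000) costs 10**9.
p_catastrophe = 1e-3
catastrophe_cost = 1e9

# Median annual loss: the typical year sees no catastrophe at all.
median_loss = 0.0

# Expected (average) annual loss: driven entirely by the outlier.
expected_loss = p_catastrophe * catastrophe_cost  # 1e6 per year
```

Here the median says the risk is zero, while the expectation correctly charges a million per year for it, which is the quantity an expected-utility maximizer actually needs.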

Comment author: Douglas_Knight 10 June 2013 07:46:01PM  4 points

If you are a rational agent, you work with averages, because you want to maximize expected utility.

Caring about expected utility doesn't mean you should care about the expectation of anything else. Don't take averages early. In particular, the expected ratio of girls to boys is not the ratio of expected girls to expected boys. Similarly, "expected rise in sea level per year" is irrelevant, because utility is not linear in sea level.

Comment author: PhilGoetz 17 June 2013 07:17:46PM 1 point

True. Computing average utility is what's important. But the median you suggested doesn't let you compute average utility either.