
Squark comments on Quickly passing through the great filter - Less Wrong Discussion

10 Post author: James_Miller 06 July 2014 06:50PM




Comment author: Squark 09 July 2014 07:04:22PM 2 points [-]

"Do we live in a late filter universe?" is not a meaningful question. The meaningful question is "should we choose strategy A, suited to early-filter universes, or strategy B, suited to late-filter universes?" According to UDT, we should choose the strategy that leads to maximum expected utility given that all similar players choose it, where the expectation averages over both kinds of universe. Naive anthropic reasoning suggests we should assume we are in a late filter universe, since there are many more players there. This, however, is precisely offset by the fact that these players have a poor chance of success even when playing B, so their contribution to the difference in expected utility between A and B is smaller. Therefore, we should ignore anthropic reasoning and focus on the a priori probability of an early filter versus a late filter.
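The offset argument can be sketched numerically. Below is a toy model (every number, including the survival probabilities, is my own illustrative assumption, not from the comment): utility is the expected number of civilizations that pass the filter, and every player runs the same strategy, as UDT requires. The UDT criterion (a priori probabilities times total utility) and the naive anthropic criterion (player-count-weighted per-player survival) come out proportional, so the extra anthropic weight on late-filter universes is exactly cancelled by the tiny per-player stakes there.

```python
# Toy model of the offset argument; all figures are illustrative assumptions.
p_early, p_late = 0.5, 0.5           # a priori prior over universe types
n_early, n_late = 10, 10_000         # players reaching our stage in each type
# Per-player survival probability under each strategy (assumed figures;
# note how small the late-filter chances are even for strategy B):
surv = {
    "A": {"early": 0.50, "late": 0.0001},  # A suits early-filter worlds
    "B": {"early": 0.30, "late": 0.0010},  # B suits late-filter worlds
}

def udt_utility(s):
    """UDT: a-priori-weighted total number of surviving civilizations."""
    return (p_early * n_early * surv[s]["early"]
            + p_late * n_late * surv[s]["late"])

def anthropic_eu(s):
    """Naive anthropic: weight universes by player count, then take
    a random player's own survival probability."""
    w_early, w_late = p_early * n_early, p_late * n_late
    return ((w_early * surv[s]["early"] + w_late * surv[s]["late"])
            / (w_early + w_late))

# The two criteria differ only by the constant factor (w_early + w_late),
# so they always rank A and B the same way.
```

With these particular numbers both criteria favour B, but the point is structural: the player-count weight that boosts late-filter universes multiplies survival probabilities that are correspondingly tiny, so the ranking reduces to the a priori comparison.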

Comment author: [deleted] 15 August 2014 06:22:31PM 0 points [-]

The anthropic reasoning there isn't valid, though. Anthropic reasoning can only be used to rule out impossibilities: if a universe were impossible, we wouldn't be in it. Any inference beyond that makes assumptions about prior distributions and selection effects which have no justification. There are many papers (e.g. http://arxiv.org/abs/astro-ph/0610330) showing how anthropic reasoning becomes anthropic rationalization when it comes to selecting one model over another.

Comment author: Squark 18 August 2014 06:28:41PM 0 points [-]

Actually, it's possible to always take anthropic considerations into account by using UDT + the Solomonoff prior. I think cosmologists would benefit from learning about it.

Comment author: [deleted] 18 August 2014 10:41:53PM 0 points [-]

That's an empty statement. It is always possible to take anthropic considerations into account by using [insert decision theory] + [insert prior]. Why did you choose that decision theory and, more importantly, that prior?

We have knowledge about only one universe. A single data point is insufficient to infer any information about universe selection priors.
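The one-data-point objection can be made concrete with a small sketch (the outcome labels and all probabilities below are hypothetical): the entire evidential force of a single observation is one likelihood ratio, and the numbers feeding into that ratio are exactly the assumed priors under dispute.

```python
# A single data point updates model odds through one likelihood ratio;
# everything entering that ratio comes from the assumed priors themselves.
# All labels and numbers below are hypothetical.
observed = "late_filter"                     # our one "data point"

# Two rival universe-selection priors over the same outcome space:
prior_1 = {"early_filter": 0.9, "late_filter": 0.1}
prior_2 = {"early_filter": 0.1, "late_filter": 0.9}

likelihood_ratio = prior_1[observed] / prior_2[observed]   # = 1/9

# A modest difference in how plausible we found the two models to begin
# with swamps the update entirely:
odds_before = 9.0                            # favoured prior_1 nine to one
odds_after = odds_before * likelihood_ratio  # = 1.0, back to a coin flip
```

Nothing in the single observation constrains which pair of priors to write down in the first place, which is the commenter's point.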

Comment author: James_Miller 09 July 2014 08:53:27PM 0 points [-]

Thanks for the explanation!