Wei_Dai comments on Late Great Filter Is Not Bad News - Less Wrong

Post author: Wei_Dai 04 April 2010 04:17AM




Comment author: Nick_Tarleton 04 April 2010 04:26:16PM  13 points

It seems to me that viewing a late Great Filter as worse news than an early Great Filter is another instance of the confusion and irrationality of SSA/SIA-style anthropic reasoning and subjective anticipation. If you anticipate anything, believing that the Great Filter is more likely to lie in the future means you have to anticipate a higher probability of experiencing doom.

Let's take this further: is there any reason, besides our obsession with subjective anticipation, to discuss whether a late great filter is 'good' or 'bad' news, over and above policy implications? Why would an idealized agent evaluate the utility of counterfactuals it knows it can't realize?

Comment author: Wei_Dai 05 April 2010 06:40:10AM  8 points

That is a good question, and one that I should have asked and tried to answer before I wrote this post. Why do we divide possible news into "good" and "bad", and "hope" for good news? Does that serve some useful cognitive function, and if so, how?

Without having good answers to these questions, my claim that a late great filter should not be considered bad news may just reflect confusion about the purpose of calling something "bad news".

Comment author: cousin_it 21 December 2011 06:51:26PM  1 point

About the cognitive function of "hope": it makes evolutionary sense to become all active and bothered when a big pile of utility hinges on a single uncertain event in the near future, because that makes you frantically try to influence that event. If you don't know how to influence it (as in the case of a lottery), oh well, evolution doesn't care.

Comment author: TheOtherDave 21 December 2011 07:16:55PM  1 point

Evolution might care. That is, systems that expend a lot of attention on systems they can't influence might do worse than systems that instead focus their attention on systems they can influence. But yes: either there weren't any of the second kind of system around to compete with our ancestors, or there were and they lost out for some other reason, or there were and that kind of selective attention turns out to be a bad design for our ancestral environment.