
Anthropic principles agree on bigger future filters

2 Post author: XiXiDu 03 November 2010 04:20PM

I would like to draw attention to the honours thesis of Katja Grace (Meteuphoric).

Link: meteuphoric.wordpress.com/2010/11/02/anthropic-principles-agree-on-bigger-future-filters/
PDF: dl.dropbox.com/u/6355797/Anthropic%20Reasoning%20in%20the%20Great%20Filter.pdf

My main point was that two popular anthropic reasoning principles, the Self Indication Assumption (SIA) and the Self Sampling Assumption (SSA), as well as Full Non-indexical Conditioning (FNC), basically agree that future filter steps will be larger than we otherwise think, including the many future filter steps that are existential risks.
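The SIA side of this claim can be illustrated with a toy Bayesian calculation. This is a minimal sketch with made-up numbers, not the thesis's actual model: SIA weights each hypothesis by how many observers in our situation it predicts, which favors worlds where the filter still lies ahead of observers like us.

```python
# Toy SIA update over the location of the Great Filter.
# All numbers are hypothetical, chosen only to show the direction of the update.

# Two hypotheses:
#   "early": the hard filter step is behind us, so very few planets
#            ever produce observers at our stage.
#   "late":  the hard step is still ahead, so many planets reach our
#            stage (and are filtered afterwards).
priors = {"early": 0.5, "late": 0.5}

# Hypothetical fraction of planets that produce observers at our stage.
observers_at_our_stage = {"early": 1e-9, "late": 1e-3}

# SIA: weight each hypothesis by the number of observers like us it predicts.
unnormalized = {h: priors[h] * observers_at_our_stage[h] for h in priors}
total = sum(unnormalized.values())
posterior = {h: w / total for h, w in unnormalized.items()}

print(posterior)  # the "late" (future) filter hypothesis dominates
```

Under these toy numbers the posterior on a future filter exceeds 0.999; the direction of the shift, not the magnitude, is the point.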

What do you think? (Consider commenting over on her blog; Robin Hanson is also there.)


Comments (5)

Comment author: steven0461 03 November 2010 08:38:17PM 1 point [-]

SIA/FNC doesn't just prove the universe is full of alien civilizations; it proves each solar system has a long sequence of such civilizations, which is totally evidence for dinos on the moon.

Comment author: XiXiDu 04 November 2010 09:21:04AM *  1 point [-]

Well, I'm not sure what to think, as I'm not yet able to follow the paper, neither the probability theory nor the concepts in question. But the conclusion seemed important enough to ask about, even more so since Robin Hanson seems to agree and Katja Grace is an SIAI visiting fellow. I know that doesn't mean much, but I thought it might not be totally bogus then (or at least useful to show how not to use probability).

That dinos on the moon link is awesome. That idea has been featured in a Star Trek: Voyager episode too :-)

Comment author: jfm 04 November 2010 02:00:31PM 0 points [-]

The novel Toolmaker Koan by John McLoughlin doesn't just feature dinosaurs on the moon; it features a dinosaur generation ship held in the outer solar system by an alien machine intelligence. A ripping good yarn, and a meditation on the Fermi Paradox.

Comment author: steven0461 04 November 2010 08:51:56PM -1 points [-]

I'm not saying it's totally bogus. I do have a ton of reservations that I don't have time to write up.

Comment author: XiXiDu 04 November 2010 04:45:34PM 0 points [-]

According to SIA, averting these filter existential risks should be prioritized more highly relative to averting non-filter existential risks such as those in this post. So, for instance, AI is less of a concern relative to other existential risks than otherwise estimated.

Light cone eating AI explosions are not filters