Anthropic principles agree on bigger future filters
I would like to draw attention to the honours thesis of Katja Grace (Meteuphoric).
Link: meteuphoric.wordpress.com/2010/11/02/anthropic-principles-agree-on-bigger-future-filters/
PDF: dl.dropbox.com/u/6355797/Anthropic%20Reasoning%20in%20the%20Great%20Filter.pdf
My main point was that two popular anthropic reasoning principles, the Self-Indication Assumption (SIA) and the Self-Sampling Assumption (SSA), as well as Full Non-indexical Conditioning (FNC), basically agree that future filter steps will be larger than we would otherwise think, including the many future filter steps that are existential risks.
What do you think? (Consider commenting over on her blog; Robin Hanson is also there.)
Comments (5)
SIA/FNC doesn't just prove the universe is full of alien civilizations, it proves each solar system has a long sequence of such civilizations, which is totally evidence for dinos on the moon.
Well, I'm not sure what to think, as I'm not yet at the point that would allow me to read the paper — neither regarding probability theory nor the concepts in question. But the conclusion seemed important enough for me to ask, all the more so as Robin Hanson seems to agree and Katja Grace is an SIAI visiting fellow. I know that doesn't mean much, but I thought it might not be totally bogus then (at least it could be used to show how not to use probability).
That dinos-on-the-moon link is awesome. That idea has also been featured in a Star Trek: Voyager episode :-)
The novel Toolmaker Koan by John McLoughlin doesn't just feature dinosaurs on the moon; it features a dinosaur generation ship held in the outer solar system by an alien machine intelligence. A ripping good yarn, and a meditation on the Fermi Paradox.
I'm not saying it's totally bogus. I do have a ton of reservations that I don't have time to write up.
Light-cone-eating AI explosions are not filters