This is the first in a series of posts I am putting together on a personal blog I started two days ago as a collection of my musings on astrobiology ("The Great A'Tuin" - sorry, I couldn't help it), and which I will be reposting here. Much has been written here about the Fermi paradox and the 'great filter'. It seems to me that going back to a somewhat more basic level of astronomy and astrobiology is extremely informative about these questions, and so this is what I will be doing. The bloggery is intended for a slightly more general audience than this site (hence much of the content of the introduction), but I think it will be of interest. Many of the points I will be making are ones I have touched on in previous comments here, but I hope to explore them in more detail.
This post references my first two posts - an introduction, and a discussion of our apparent position in space and time in the universe. The blog posts may be found at:
http://thegreatatuin.blogspot.com/2015/07/whats-all-this-about.html
http://thegreatatuin.blogspot.com/2015/07/space-and-time.html
#1 is an early filter, meaning before our current state; #4 would be around or after our current state.
Not in the sense of harming us. For the Fermi paradox, visible benevolent aliens are just as inconsistent with our observations as murderous Berserkers.
I'm trying to get you to explain why you think a belief that "AI is a significant risk" would change our credence in any of #1-5, compared to not believing that.
Ah, I see.
OK, combinations. I'm assuming #1-5 are mutually exclusive, because I don't want to mess around with too many scenarios.
For AI risk, I'm assuming a paperclipper as a reasonable example of a doomsday AI scenario.
1-high: We'd expect nothing visible.
1-low: We'd expect nothing visible.
2-high: This comes down to "how impossible?" Impossible for squishy meatbags, or impossible for an AI with a primary goal that implies spreading? We'd still expect to see something weird as entire solar systems are engineered.
2-low: We'd expect n...
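If it helps to see the bookkeeping laid out, here's a quick Python sketch of the cross product being walked through - purely my own illustration, not anything canonical. Only the cells actually stated above are filled in; the comment is truncated, so the remaining cells (and the labels for scenarios #3 and #5) are deliberately left blank:

```python
from itertools import product

# Scenarios #1-5, assumed mutually exclusive per the comment above.
scenarios = [1, 2, 3, 4, 5]
# "high" AI risk ~ a paperclipper-style doomsday AI; "low" ~ no such AI.
ai_risk = ["high", "low"]

# Expected observations for the cells worked through so far.
# Everything from (2, "low") onward is cut off in the excerpt, so unfilled.
expected = {
    (1, "high"): "nothing visible",
    (1, "low"): "nothing visible",
    (2, "high"): "something weird - entire solar systems engineered",
}

for n, risk in product(scenarios, ai_risk):
    obs = expected.get((n, risk), "not covered in the excerpt above")
    print(f"#{n}-{risk}: {obs}")
```

The point of the table is just that each (scenario, AI-risk) pair has to predict what we'd see in the sky, and the Fermi observation ("nothing visible") then favors some cells over others.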