HungryHobo comments on Astronomy, Astrobiology, & The Fermi Paradox I: Introductions, and Space & Time - Less Wrong

Post author: CellBioGuy 26 July 2015 07:38AM


Comment author: HungryHobo 30 July 2015 10:11:30AM

Ah, I see.

OK, combinations. For each of 1 to 5 I'm assuming they're mutually exclusive, because I don't want to mess around with too many scenarios.

For AI risk I'm assuming a paperclipper as a reasonable example of a doomsday AI scenario.

1-high: We'd expect nothing visible.

1-low: We'd expect nothing visible.

2-high: This comes down to "how impossible?" Impossible for squishy meatbags, or impossible for an AI whose primary goal implies spreading? We'd still expect to see something weird as entire solar systems are engineered.

2-low: We'd expect nothing visible.

3-high: We'd expect nothing visible.

3-low: We'd expect nothing visible.

4-high: Implies something much more immediately deadly than AI risk, which we should be devoting our resources to avoiding.

4-low: We'd expect nothing visible.

5-high: We'd still expect to see the universe being converted into paperclips by someone who screwed up.

5-low: We'd expect nothing visible.
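(If it helps, here's the same grid as a quick Python sketch. The numbers 1 to 5 refer to the five options from the parent comment, which I haven't restated here, and the outcome strings just paraphrase my lines above.)

```python
# Expected astronomical observations for each (scenario, AI-risk level) pair.
# Scenarios 1-5 are the mutually exclusive options from the parent comment;
# "high"/"low" is the assumed level of AI risk.
NOTHING = "nothing visible"

expectations = {
    (1, "high"): NOTHING,
    (1, "low"): NOTHING,
    (2, "high"): "something weird: entire solar systems engineered "
                 "(hinges on 'how impossible' -- for meatbags, or for a spreading AI?)",
    (2, "low"): NOTHING,
    (3, "high"): NOTHING,
    (3, "low"): NOTHING,
    (4, "high"): "implies a nearer, deadlier risk than AI that we should be working on",
    (4, "low"): NOTHING,
    (5, "high"): "the universe being converted into paperclips by someone who screwed up",
    (5, "low"): NOTHING,
}

# Print the grid in scenario order.
for (scenario, risk), outcome in sorted(expectations.items()):
    print(f"{scenario}-{risk}: {outcome}")
```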

OK, fair point made; there are a couple more options implied:

a: early filter,

b: low AI risk,

c: wizards already in charge who enforce low AI risk,

d: AI risk being far less important than some other really horrible, soon-to-come risk.