gwern comments on Open Thread: November 2009 - Less Wrong
To resurrect the Pascal's mugging problem:
This seems like a hack around the problem.
What if we are told there are infinitely many people, so that everybody could affect 3^^^^3 other people (as in Hilbert's Hotel)?
What consequences would this prior lead to? Presumably it forces the odds of our making a successful AI down to 1/some-very-large-number, since a successful AI could go on to control everything within our light cone and, for the rest of history, affect the lives of some-very-large-number of beings.
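For concreteness, the penalty being invoked can be sketched as follows (notation mine, not from the thread): if a hypothesis \(H_N\) claims an act will affect \(N\) people, the anthropic/leverage prior discounts it in proportion to \(N\), which keeps the expected utility bounded no matter how large \(N\) gets:

```latex
% Leverage-penalty sketch (notation mine): the prior on a hypothesis
% that you can affect N people is discounted by a factor of N, so
% P(H_N) <= 1/N, and the expected utility of acting on it is bounded:
\[
P(H_N) \le \frac{1}{N}
\quad\Longrightarrow\quad
\mathbb{E}[U] \;=\; P(H_N)\cdot N \cdot u \;\le\; \frac{1}{N}\cdot N \cdot u \;=\; u .
\]
```

The worry above is that this same discount, applied to AI, drives the prior probability of a light-cone-controlling success down to 1/some-very-large-number, and that in a Hilbert's-Hotel world where everyone can affect \(N\) others, the penalty no longer singles out the mugger's claim as special.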
(For that matter, wouldn't this solution have us bite the bullet of the Doomsday argument in general, and conclude that we and our creations will expire soon? Otherwise, how likely is it that we would just happen to exist near the beginning of the universe/humanity, and thus be in a position to affect the yawning eons after us?)