This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. Feel free to rid yourself of cached thoughts by doing so in Old Church Slavonic. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
If you're new to Less Wrong, check out this welcome post.
To resurrect the Pascal's mugging problem:
The proposed solution, penalizing the prior probability of being able to affect 3^^^^3 people in proportion to that number, seems like a hack around the problem.
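To make the arithmetic concrete (a rough sketch, on the assumption that the penalty scales as 1/N when N people are at stake):

\[
\mathbb{E}[\text{pay the mugger}] \;\approx\; \underbrace{\frac{1}{3\uparrow\uparrow\uparrow\uparrow 3}}_{\text{penalized prior}} \times \underbrace{3\uparrow\uparrow\uparrow\uparrow 3 \cdot u}_{\text{utility at stake}} \;=\; u,
\]

so the astronomical payoff cancels against the penalty, and paying the mugger is worth no more than about one person's utility u.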
What if we are told there are infinitely many people, so that every one of them could affect 3^^^^3 other people (per Hilbert's Hotel)?
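One way to make the Hilbert's Hotel point concrete (my framing, writing N for 3^^^^3): index the people by the natural numbers 0, 1, 2, ... and let person k affect the people numbered kN+1 through (k+1)N,

\[
k \;\longmapsto\; \{kN+1,\; kN+2,\; \dots,\; (k+1)N\}, \qquad N = 3\uparrow\uparrow\uparrow\uparrow 3 .
\]

The blocks are disjoint, yet every person is a decider over N distinct others, so a penalty of the form "at most 1 in N can be in the decider's position" gets no purchase.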
What consequences would this prior lead to, given that it would put the odds of our making a successful AI at about 1/some-very-large-number, because a successful AI could go on to control everything within our light cone and affect the lives of some-very-large-number of beings for the rest of history?
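If so, the same cancellation as above seems to apply (again assuming the 1/N penalty, with N the number of beings a successful AI would affect):

\[
\mathbb{E}[\text{work toward a successful AI}] \;\approx\; \frac{1}{N} \times N \cdot u \;=\; u,
\]

i.e. the prior washes out the astronomical payoff, and building the AI looks no more valuable than helping a single being.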
(For that matter, wouldn't this solution have us bite the bullet on the Doomsday argument in general, and conclude that we and our creations will expire soon? Otherwise, how likely is it that we would just happen to exist near the beginning of the universe/humanity, and thus be in a position to affect the yawning eons after us?)
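For comparison, the Doomsday-style calculation this seems to commit us to (the standard self-sampling framing, which I'm supplying here): if N observers will ever exist and our birth rank n is treated as uniformly drawn from 1 through N, then

\[
P(\text{rank} = n \mid N) \;=\; \frac{1}{N} \quad \text{for } n \le N,
\]

so hypotheses on which N is astronomically large take the same 1/N-shaped hit, and "we happen to exist near the very beginning of a vast future" ends up heavily disfavored.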