Document comments on Don't plan for the future - Less Wrong Discussion

1 Post author: PhilGoetz 23 January 2011 10:46PM

Comment author: Document 24 January 2011 09:46:46AM

Doesn't this depend on the likelihood/prevalence of intelligent life? For all we know, we might be the only sentient species out there.

For the record, that's Michael Anissimov's position (see in particular the last paragraph).

Also, even if there are a lot of unfriendly AIs out there, a friendly one would still vastly improve our fate, whether by fighting off the unfriendly ones, reaching a mutually beneficial agreement with them, or running rescue simulations.

So far as I understand "rescue simulations" in this context, I'd classify them as a particular detail of "a short happy life".

The greater age of the alien AIs might give them a massive advantage, but that depends on whether FOOM will ever run into diminishing returns. If it does, then the difference between, say, a 500,000-year-old AI and a 2-million-year-old AI may not be much.

I wouldn't expect there to ever be diminishing returns from acquiring more matter and energy.