Eliezer_Yudkowsky comments on Open Thread: February 2010 - Less Wrong

Post author: wedrifid 01 February 2010 06:09AM




Comment author: Eliezer_Yudkowsky 01 February 2010 10:35:48AM 4 points [-]

On the same problem? I might attach some extra terms and conditions this time around, like "offer void (stakes will be returned) if the AI has the power and desire to use us for paperclips but our lives are ransomed by some other entity with the power to offer the AI more paperclips than it could produce by consuming us", "offer void if the explanation of the Fermi Paradox is a previously existing superintelligence which shuts down any new superintelligences produced", and "offer void if the AI consumes our physical bodies but we continue via the sort of weird anthropic scenario introduced in The Finale of the Ultimate Meta Mega Crossover." With those provisos, my probability drops off the bottom of the chart. I'm still not sure about the bet, though, because I want to keep my total of outstanding bets to something I can honor if they all simultaneously go wrong (no matter how surprising that would be to me), and this would use up $10,000 of that, even if it's on a sure thing - I might be able to get a better price on some other sure thing.

Comment author: Unknowns 01 February 2010 12:06:19PM 0 points [-]

If we survive via an anthropic scenario (it's hard to see how that could preserve several persons together, but just in case), then you win the bet, since that would be more like a second world than a continuation of this one.

If the AI is shut down before it has had a chance to operate, the bet wouldn't have been settled yet, so you wouldn't have to pay anything.

Anyway, I'm still going to win.