Baughn comments on New Year's Predictions Thread - Less Wrong

18 Post author: MichaelVassar 30 December 2009 09:39PM


Comments (426)


Comment author: Baughn 01 January 2010 12:07:02PM 3 points

I'll go ahead and claim a 98% chance that, if a transhuman, non-Friendly intelligence is created, it makes things worse. And an 80% chance that this is in a nonrecoverable way.

I kinda hope you're right, but I just don't see how.

Comment author: Unknowns 01 January 2010 01:26:30PM 0 points

This prediction is technically consistent with mine (although that doesn't mean I don't still disagree with it).

Comment author: Baughn 02 January 2010 05:30:53PM 0 points

In other words, one of us did not specify the prediction correctly.

I don't think it's me. I deliberately didn't say it'd destroy the world. Would it be correct to modify yours to say "...and not making the world a worse place"?

Comment author: Unknowns 02 January 2010 07:18:16PM 2 points

No. If you look at the original bet with Eliezer, he was betting that, under those conditions, the AI would literally destroy the world. In other words, if both of us are still around and I'm capable of claiming the money, I win the bet, even if the world is worse off.

Comment author: Eliezer_Yudkowsky 02 January 2010 08:38:23PM 4 points

Yup. If he lives to collect, he collects.

Comment author: Technologos 02 January 2010 05:35:15PM 0 points

one of us did not specify the prediction correctly

Assuming that there is, in fact, a correct way to specify the predictions. It's possible that you weren't actually disagreeing: you could both assign substantial probability to (world is made worse off but not destroyed | non-FAI is created) while still assigning a low probability to (non-FAI is created in the next decade).