Unknowns comments on New Year's Predictions Thread - Less Wrong

Post author: MichaelVassar 30 December 2009 09:39PM


Comments (426)


Comment author: Unknowns 01 January 2010 08:17:04AM 3 points

I predict a 10% chance that I win my bet with Eliezer in the next decade (the one about a transhuman intelligence being created not by Eliezer, not being deliberately created for Friendliness, and not destroying the world).

Comment author: Baughn 01 January 2010 12:07:02PM 3 points

I'll go ahead and claim a 98% chance that, if a transhuman, non-Friendly intelligence is created, it makes things worse. And an 80% chance that this is in a nonrecoverable way.

I kinda hope you're right, but I just don't see how.

Comment author: Unknowns 01 January 2010 01:26:30PM 0 points

This prediction is technically consistent with mine (although that doesn't mean I don't disagree with it anyway).

Comment author: Baughn 02 January 2010 05:30:53PM 0 points

In other words, one of us did not specify the prediction correctly.

I don't think it's me. I deliberately didn't say it'd destroy the world. Would it be correct to modify yours to say "...and not making the world a worse place"?

Comment author: Unknowns 02 January 2010 07:18:16PM 2 points

No. If you look at the original bet with Eliezer, he was betting that on those conditions, the AI would literally destroy the world. In other words, if both of us are still around, and I'm capable of claiming the money, I win the bet, even if the world is worse off.

Comment author: Eliezer_Yudkowsky 02 January 2010 08:38:23PM 4 points

Yup. If he lives to collect, he collects.

Comment author: Technologos 02 January 2010 05:35:15PM 0 points

"one of us did not specify the prediction correctly"

Assuming that there is, in fact, a correct way to specify the predictions. It's possible that you weren't actually disagreeing and that you both assign substantial probability to (world is made worse off but not destroyed | non-FAI is created) while still having a low probability for (non-FAI is created in the next decade).

Comment author: orthonormal 02 January 2011 04:32:00PM 0 points

Considering that the bet includes "not destroying the world", the only fair way to do this type of bet (for money) is for you to give the other party $X now, and for them to give you $Y later if you turn out to be correct.

Comment author: Unknowns 04 January 2011 08:04:01AM 1 point

That's exactly what happened; I gave Eliezer $10, and he will pay me $1000 when I win the bet.
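The stakes above imply a break-even probability for each side. A quick sketch of the arithmetic (the $10/$1000 figures are from the comment; the calculation is mine, and it deliberately ignores the time value of money and the condition that the winner must survive to collect):

```python
# Implied break-even odds for a "pay now, collect later" bet.
stake_now = 10       # Unknowns pays Eliezer $10 up front
payout_later = 1000  # Eliezer pays Unknowns $1000 if Unknowns wins

# Unknowns comes out ahead in expectation if his probability of
# winning (and collecting) exceeds stake_now / payout_later.
break_even_p = stake_now / payout_later
print(break_even_p)  # 0.01, i.e. 1%
```

So at these stakes, Unknowns only needs to assign more than about a 1% chance to eventually winning and collecting for the bet to be worthwhile on his side, which is consistent with his stated 10%-per-decade estimate.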

Comment author: dfranke 01 January 2010 06:34:40PM 0 points

I'll put down money on the other side of this prediction provided that we can agree on an objective definition of "transhuman intelligence".

Comment author: Unknowns 01 January 2010 07:35:39PM 0 points

My bet with Eliezer can be found at http://lesswrong.com/lw/wm/disjunctions_antipredictions_etc/.

I said there at the time, "As for what constitutes the AI, since we don't have any measure of superhuman intelligence, it seems to me sufficient that it be clearly more intelligent than any human being." Everyone's agreement that it is clearly more intelligent would be the "objective" standard.

In any case, I am risk averse, so I don't really want to bet on the next decade, which according to my prediction would give me a 90% chance of losing the bet. The bet with Eliezer was indefinite, since I already paid; I am simply counting on it happening within our lifetimes.

Comment author: dfranke 01 January 2010 08:26:05PM 2 points

I like your side of the original bet because I think the probability that the first superintelligent AI will be only slightly smarter than humans, non-goal-driven, and non-self-improving, and therefore non-Singularity-inducing, is better than 1%. The reason I'm willing to bet against you on the above version is that I think 10% is way overconfident for a 10-year timeframe.

Comment author: LucasSloan 01 January 2010 11:12:45PM 0 points

Would a sped-up upload count as super-intelligent in your opinion?