dfranke comments on New Year's Predictions Thread - Less Wrong

Post author: MichaelVassar 30 December 2009 09:39PM


Comment author: dfranke 01 January 2010 06:34:40PM 0 points

I'll put down money on the other side of this prediction provided that we can agree on an objective definition of "transhuman intelligence".

Comment author: Unknowns 01 January 2010 07:35:39PM 0 points

My bet with Eliezer can be found at http://lesswrong.com/lw/wm/disjunctions_antipredictions_etc/.

I said there at the time, "As for what constitutes the AI, since we don't have any measure of superhuman intelligence, it seems to me sufficient that it be clearly more intelligent than any human being." Everyone's agreement that it is clearly more intelligent would be the "objective" standard.

In any case, I am risk averse, so I don't really want to bet on the next decade: by my own prediction, I would have a 90% chance of losing that bet (see the sketch below). The bet with Eliezer had no deadline, since I already paid my side up front; I am simply counting on the event happening within our lifetimes.
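To make the risk-aversion point concrete, here is a minimal sketch, assuming illustrative even-money stakes of $100 and log utility; the stake, bankroll, and utility function are assumptions for the example, not terms of any actual bet.

    # Illustrative only: stake, bankroll, and utility function are assumptions
    # for this sketch, not terms of the actual bet.
    import math

    p_win = 0.10     # my probability that the event occurs within the decade
    stake = 100.0    # hypothetical even-money stake, in dollars
    wealth = 1000.0  # hypothetical bankroll

    # Expected dollar value of taking the bet (clearly negative at even odds):
    ev = p_win * stake - (1 - p_win) * stake

    # A risk-averse agent maximizes expected utility of wealth; log utility is
    # one standard concave choice. Risk aversion penalizes the bet even beyond
    # its negative expected dollar value.
    eu_bet = p_win * math.log(wealth + stake) + (1 - p_win) * math.log(wealth - stake)
    eu_pass = math.log(wealth)

    print(f"expected dollar value of betting: {ev:+.2f}")
    print(f"expected log-utility: bet {eu_bet:.4f} vs. decline {eu_pass:.4f}")

At these odds both numbers favor declining, and a concave utility function makes the bet look worse still; hence even a bettor who expects the event eventually has little reason to stake money on the next decade specifically.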

Comment author: dfranke 01 January 2010 08:26:05PM 2 points

I like your side of the original bet because I think the probability that the first superintelligent AI will be only slightly smarter than humans, non-goal-driven, and non-self-improving, and therefore non-Singularity-inducing, is greater than 1%. The reason I'm willing to bet against you on the above version is that I think 10% is way overconfident for a 10-year timeframe.

Comment author: LucasSloan 01 January 2010 11:12:45PM 0 points

Would a sped-up upload count as super-intelligent in your opinion?