sketerpot comments on Open Thread, August 2010 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I would like feedback on my recent blog post:
http://www.kmeme.com/2010/07/singularity-is-always-steep.html
It's simplistic for this crowd, but something that bothered me for a while. When I first saw Kurzweil speak in person (GDC 2008) he of course showed both linear and log scale plots. But I always thought the log scale plots were just a convenient way to fit more on the screen, that the "real" behavior was more like the linear scale plot, building to a dramatic steep slope in the coming years.
Instead I now believe that in many cases the log plot is closer to "the real thing," or at least to how we perceive that thing. For example, in the post I talk about computational capacity. I believe the exponential increase in capacity translates into a perceived linear increase in utility. A computer twice as fast is only incrementally more useful, in terms of what applications can be run. This holds true today and will hold true in 2040 or any other year.
Therefore computational utility is incrementally increasing today and will be incrementally increasing in 2040 or any future date. It's not building to some dramatic peak.
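The claim above amounts to saying perceived utility grows like the logarithm of raw capacity. A minimal sketch of that relationship, assuming a Moore's-law-style doubling of capacity and a hypothetical log2 perception function (both are illustrative assumptions, not anything from the post):

```python
import math

def capacity(years, doubling_period=2.0):
    """Raw computational capacity, normalized to 1.0 at year 0.

    Assumes capacity doubles every `doubling_period` years
    (an illustrative Moore's-law-style assumption).
    """
    return 2.0 ** (years / doubling_period)

def perceived_utility(cap):
    """Hypothetical assumption: utility is perceived on a log scale."""
    return math.log2(cap)

# Exponential growth in capacity shows up as constant, incremental
# steps in perceived utility -- the same step each decade.
for year in (0, 10, 20, 30):
    c = capacity(year)
    print(f"year {year:2d}: capacity x{c:,.0f}, perceived utility {perceived_utility(c):.0f}")
```

Under these assumptions the utility gain per decade is the same in 2040 as it is today, which is the "no dramatic peak" point: the exponential never feels exponential from the inside.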
None of this says anything against the possibility of a Singularity. If you pass the threshold where machine intelligence is possible, you pass it, whatever the perceived rate of progress at the time.
I agree with your post, especially since I expect to win my bet with Eliezer.
I don't know what this bet is, and I don't see a link anywhere in your post.
http://wiki.lesswrong.com/wiki/Bets_registry
(I am the original Unknown but I had to change my name when we moved from Overcoming Bias to Less Wrong because I don't know how to access the other account.)
Any chance you and Eliezer could set a date on your bet? I'd like to import the 3 open bets to Prediction Book, but I need a specific date. (PB, rightly, doesn't do open-ended predictions.)
E.g., perhaps 2100, well after many Singularitarians expect some sort of AI, and also well after both of your actuarial death dates.
If we agreed on that date, what would happen in the event that there was no AI by that time and both of us were still alive? (These conditions are surely very unlikely, but there has to be some determinate answer anyway.)
You could either
I like #2 better since I dislike implicit premises and this (while you two are still relatively young and healthy) is as good a time as any to clarify the terms. But #1 follows more the Long Bets formula.
Eliezer and I are probably about equally confident that "there will not be AI by 2100, and both Eliezer and Unknown will still be alive" is incorrect. So it doesn't seem very fair to select either 2 or 3. So option 1 seems better.