Larks comments on The Curve of Capability - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (264)
-sigh-
This crap goes on year after year, decade after bloody decade. Did you know the Singularity was supposed to happen in 2000? Then in 2005. Then in 2010. Guess how many Singularitarians went "oh hey, our predictions keep failing, maybe that's evidence our theory isn't actually right after all"? If you guessed none at all, give yourself a brownie point for an inspired guess. It's like the people who congregate on top of a hill waiting for the angels or the flying saucers to take them up to heaven. They just go "well our date was wrong, but that doesn't mean it's not going to happen, of course it is, Real Soon Now." Every time we actually try to do any recursive self-improvement, it fails to do anything like what the AI foom crowd says it should do, but of course, it's never "well, maybe recursive self-improvement isn't all it's cracked up to be," it's always "your faith wasn't strong enough," oops, "you weren't using enough of it," or "that's not the right kind" or some other excuse.
That's what I have to deal with, and when I asked you for a prediction, you gave me the usual crap about oh well you'll see when the Apocalypse comes and we all die, ha ha. And that's the most polite terms I'm willing to put it in.
I've made it clear how my theory can be falsified: demonstrate recursive self-improvement doing something beyond the curve of capability. Doesn't have to be taking over the world, just sustained improvement beyond what my theory says should be possible.
If you're willing to make an actual, sensible prediction of RSI doing something, or some other event (besides the Apocalypse) coming to pass, such that if it fails to do that, you'll agree your theory has been falsified, great. If not, fine, I'll assume your faith is absolute and drop this debate.
If you think that most Singularities will be Unfriendly, the Anthropic Shadow means that their absence from our timeline isn't very strong evidence against their being likely in the future: no matter what proportion of the multiverse sees the light cone paperclipped in 2005, all the observers in 2010 will be in universes that weren't ravaged.
This is true if you think the maximum practical speed of interstellar colonization will be extremely close to (or faster than) the speed of light. (In which case, it doesn't matter whether we are talking Singularity or not, friendly or not, only that colonization suppresses subsequent evolution of intelligent life, which seems like a reasonable hypothesis.)
If the maximum practical speed of interstellar colonization is significantly slower than the speed of light (and assuming mass/energy as we know them remain scarce resources, e.g. advanced civilizations don't Sublime into hyperspace or whatever), then we would be able to observe advanced civilizations in our past light cone whose colonization wave hasn't yet reached us.
Of course there is as yet no proof of either hypothesis, but such reasonable estimates as we currently have suggest the latter.
Nitpick: If the civilization is spreading by SETI attack, observing them could be the first stage of being colonized by them. But I think the discussion may be drifting off-point here.