wedrifid comments on The Curve of Capability - Less Wrong

Post author: rwallace 04 November 2010 08:22PM


Comment author: wedrifid 04 November 2010 11:05:04PM *  7 points [-]

I'll also ask, assuming I'm right, is there any weight of evidence whatsoever that would convince you of this? Or is "AI go foom" for you a matter of absolute, unshakable faith?

It would be better if you waited until you had made something of a solid argument before resorting to that appeal. Even Robin's "Trust me, I'm an Economist!" is more persuasive.

The Bottom Line is one of the earliest posts in Eliezer's own rationality sequences and describes approximately this objection. You'll note that he added an Addendum:

This is intended as a caution for your own thinking, not a Fully General Counterargument against conclusions you don't like.

Comment author: rwallace 05 November 2010 12:13:14AM 0 points [-]

I'm resisting the temptation to say "trust me, I'm an AGI researcher" :-) Bear in mind that my bottom line was actually the pro "AI go foom" side; it's still what I would like to believe.

But my theory is clearly falsifiable. I stand by my position that it's fair to ask you and Eliezer whether your theory is falsifiable, and if so, what evidence you would agree to have falsified it.

Comment author: wedrifid 05 November 2010 12:39:49AM *  12 points [-]

I'm resisting the temptation to say "trust me, I'm an AGI researcher" :-)

But barely. ;)

You would not believe how little that would impress me. Well, I suppose you would - I've been talking with XiXi about Ben, after all. I wouldn't exactly say that your status incentives promote neutral reasoning on this position - or Robin on the same. It is also slightly outside of the core of your expertise, which is exactly where the judgement of experts is notoriously demonstrated to be poor.

Bear in mind that my bottom line was actually the pro "AI go foom" side; it's still what I would like to believe.

You are trying to create AGI without friendliness and you would like to believe it will go foom? And this is supposed to make us trust your judgement with respect to AI risks?

Incidentally, 'the bottom line' accusation here was yours, not the other way around. The reference was to question its premature use as a fully general counterargument.

But my theory is clearly falsifiable. I stand by my position that it's fair to ask you and Eliezer whether your theory is falsifiable, and if so, what evidence you would agree to have falsified it.

We are talking here about predictions of the future. Predictions. That's an important keyword that is related to falsifiability. Build a flipping AGI of approximately human level and see whether the world as we know it ends within a year.

You just tag-teamed one general counterargument out to replace it with a new one. Unfalsifiability has a clear meaning when it comes to creating and discussing theories and it is inapplicable here to the point of utter absurdity. Predictions, for crying out loud.

Comment author: rwallace 05 November 2010 03:45:15AM 5 points [-]

I wouldn't exactly say that your status incentives promote neutral reasoning on this position

No indeed, they very strongly promote belief in AI foom - that's why I bought into that belief system for a while, because if true, it would make me a potential superhero.

It is also slightly outside of the core of your expertise, which is exactly where the judgement of experts is notoriously demonstrated to be poor.

Nope, it's exactly in the core of my expertise. Not that I'm expecting you to believe my conclusions for that reason.

You are trying to create AGI without friendliness and you would like to believe it will go foom?

When I believed in foom, I was working on Friendly AI. Now that I no longer believe that, I've reluctantly accepted that human-level AI in the near future is not possible, and I'm working on smarter tool AI instead - well short of human equivalence, but hopefully, with enough persistence and luck, better than what we have today.

We are talking here about predictions of the future. Predictions. That's an important keyword that is related to falsifiability.

That is what falsifiability refers to, yes.

My theory makes the prediction that even when recursive self-improvement is used, the results will be within the curve of capability, and will not produce more than a steady exponential rate of improvement.

Build a flipping AGI of approximately human level and see whether the world as we know it ends within a year.

Are you saying your theory makes no other predictions than this?

Comment author: wedrifid 05 November 2010 04:13:08AM *  4 points [-]

Are you saying your theory makes no other predictions than this?

RWallace, you made a suggestion of unfalsifiability, a ridiculous claim. I humored you by giving the most significant, obvious and overwhelmingly critical way to falsify (or confirm) the theory. You now presume to suggest that such a reply amounts to a claim that this is the only prediction that could be made. This is, to put it in the most polite terms I am willing, disingenuous.

Comment author: rwallace 05 November 2010 04:38:50AM -1 points [-]

-sigh-

This crap goes on year after year, decade after bloody decade. Did you know the Singularity was supposed to happen in 2000? Then in 2005. Then in 2010. Guess how many Singularitarians went "oh hey, our predictions keep failing, maybe that's evidence our theory isn't actually right after all"? If you guessed none at all, give yourself a brownie point for an inspired guess. It's like the people who congregate on top of a hill waiting for the angels or the flying saucers to take them up to heaven. They just go "well our date was wrong, but that doesn't mean it's not going to happen, of course it is, Real Soon Now." Every time we actually try to do any recursive self-improvement, it fails to do anything like what the AI foom crowd says it should do, but of course, it's never "well, maybe recursive self-improvement isn't all it's cracked up to be," it's always "your faith wasn't strong enough," oops, "you weren't using enough of it," or "that's not the right kind" or some other excuse.

That's what I have to deal with, and when I asked you for a prediction, you gave me the usual crap about oh well you'll see when the Apocalypse comes and we all die, ha ha. And that's the most polite terms I'm willing to put it in.

I've made it clear how my theory can be falsified: demonstrate recursive self-improvement doing something beyond the curve of capability. Doesn't have to be taking over the world, just sustained improvement beyond what my theory says should be possible.

If you're willing to make an actual, sensible prediction of RSI doing something, or some other event (besides the Apocalypse) coming to pass, such that if it fails to do that, you'll agree your theory has been falsified, great. If not, fine, I'll assume your faith is absolute and drop this debate.

Comment author: shokwave 05 November 2010 05:23:29AM 7 points [-]

It's like the people who congregate on top of a hill waiting for the angels or the flying saucers to take them up to heaven. They just go "well our date was wrong, but that doesn't mean it's not going to happen, of course it is, Real Soon Now."

That the Singularity concept pattern-matches doomsday cults is nothing new to anyone here. You looked further into it and declared it false, wedrifid and others looked into it and declared it possible. The discussion is now about evidence between those two points of view. Repeating that it looks like a doomsday cult is taking a step backwards, back to where we came to this discussion from.

Comment author: JoshuaZ 05 November 2010 05:27:00AM 8 points [-]

rwallace's argument isn't centering on the standard argument that makes it look like a doomsday cult. He's focusing on an apparent repetition of predictions while failing to update when those predictions have failed. That's different than the standard claim about why Singularitarianism pattern matches with doomsday cults, and should, to a Bayesian, be fairly disturbing if he is correct about such a history.

Comment author: shokwave 05 November 2010 06:25:03AM 6 points [-]

Fair enough. I guess his rant pattern-matched the usual anti-doomsday-cult stuff I see involving the singularity. Keep in mind that, as a Bayesian, it is possible to adjust the value of those people making the predictions instead of the likelihood of the event. Certainly, that is what I have done; I care less for predictions, even from people I trust to reason well, because a history of failing predictions has taught me not that predicted events don't happen, but rather that predictions are full of crap. This has the converse effect of greatly reducing the value of (in hindsight) correct predictions, which seems to be a pretty common failure mode for a lot of belief mechanisms: that a correct prediction alone is enough evidence. I would require the process by which the prediction was produced to consistently predict correctly.

Comment author: JoshuaZ 05 November 2010 04:41:24AM 4 points [-]

So, I'm vaguely aware of Singularity claims for 2010. Do you have citations for people making such claims that it would happen in 2000 or 2005?

I agree that pushing something farther and farther into the future is a potential warning sign.

Comment author: timtyler 05 November 2010 09:49:56AM *  7 points [-]

In "The Maes-Garreau Point" Kevin Kelly lists poorly-referenced predictions of "when they think the Singularity will appear" of 2001, 2004 and 2005 - by Nick Hogard, Nick Bostrom and Eliezer Yudkowsky respectively.

Comment author: steven0461 05 November 2010 07:49:08PM *  5 points [-]

I agree that pushing something farther and farther into the future is a potential warning sign.

But only a potential warning sign -- fusion power is always 25 years away, but so is the decay of a Promethium-145 atom.
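(The promethium half of that analogy rests on the memorylessness of exponential decay: conditional on having survived so far, the atom's expected remaining lifetime never shrinks. A minimal Python sketch, assuming Pm-145's roughly 17.7-year half-life, which gives a mean lifetime of 17.7/ln 2, about 25.5 years - hence "always 25 years away":)

```python
import math
import random

HALF_LIFE = 17.7                      # Pm-145 half-life, years (approx.)
MEAN_LIFE = HALF_LIFE / math.log(2)   # mean lifetime ~ 25.5 years

def mean_remaining_life(already_survived, n=50_000, seed=0):
    """Monte Carlo estimate of the expected remaining lifetime of an
    atom that has already survived `already_survived` years."""
    rng = random.Random(seed)
    total, count = 0.0, 0
    while count < n:
        t = rng.expovariate(1.0 / MEAN_LIFE)  # sample a decay time
        if t > already_survived:              # condition on survival so far
            total += t - already_survived
            count += 1
    return total / count

for waited in (0, 25, 100):
    # all three estimates come out near 25.5 years: waiting longer
    # does not bring the decay any closer
    print(waited, round(mean_remaining_life(waited), 1))
```

(Fusion power, by contrast, is a human research program, not a memoryless process, which is why repeated slippage there is at least weak evidence about the forecasters.)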

Comment author: JoshuaZ 05 November 2010 09:00:59PM 3 points [-]

But only a potential warning sign -- fusion power is always 25 years away, but so is the decay of a Promethium-145 atom.

Right, but we expect that for the promethium atom. If physicists had predicted that a certain radioactive sample would decay in a fixed time, and they kept pushing back the time for when it would happen, and didn't alter their hypotheses at all, I'd be very worried about the state of physics.

Comment author: rwallace 05 November 2010 04:52:48AM 0 points [-]

Not off the top of my head, which is one reason I didn't bring it up until I got pissed off :) I remember a number of people predicting 2000, over the last decades of the 20th century, I think Turing himself was one of the earliest.

Comment author: JoshuaZ 05 November 2010 04:57:21AM *  4 points [-]

Turing never discussed anything much like a Singularity, to my knowledge. What you may be thinking of is that in his original article proposing the Turing Test he said he expected it would take around fifty years for machines to pass the Turing Test. He wrote the essay in 1950. But Turing's remark is not the same claim as a Singularity occurring in 2000. Turing was off on when we'd have AI. As far as I know, he didn't comment on anything like a Singularity.

Comment author: rwallace 05 November 2010 05:02:06AM 0 points [-]

Ah, that's the one I'm thinking of -- he didn't comment on a Singularity, but did predict human level AI by 2000. Some later people did, but I didn't save any citations at the time and a quick Google search didn't find any, which is one of the reasons I'm not writing a post on failed Singularity predictions.

Comment author: wedrifid 05 November 2010 05:25:58AM *  4 points [-]

The pattern you are completing here has very little relevance to the actual content of the conversation. There is no prediction here about the date of a possible singularity and, for that matter, no mention of how probable it is. When, or if, someone such as yourself creates a human-level general intelligent agent and releases it, that will go a long way towards demonstrating that one of the theories is false.

You have iterated through a series of argument attempts here, abandoning each only to move to another equally flawed. The current would appear to be 'straw man'... and not a particularly credible straw man at that. (EDIT: Actually, no you have actually kept the 'unfalsifiable' thing here, somehow.)

Your debating methods are not up to the standards that are found to be effective and well received on lesswrong.

Comment author: magfrump 05 November 2010 09:33:29AM 9 points [-]

The way that this thread played out bothered me.

I feel like I am in agreement that computer hardware plus human algorithm equals FOOM. Just as hominids improved very steeply as a few bits were put in place which may or may not correspond to but probably included symbolic processing, I think that putting an intelligent algorithm in place on current computers is likely to create extremely rapid advancement.

On the other hand, it's possible that this isn't the case. We could sit around all day and play reference-class tennis, but we should be able to agree that there EXIST reference classes which provide SOME evidence against the thesis. The fact that fields like CAD have significant bottlenecks due to compiling time, for example, indicates that some progress currently driven by innovation still has a machine bottleneck and will not experience a recursive speedup when done by ems. The fact that in fields like applied math, new algorithms which are human insights often create serious jumps is evidence that these fields will experience recursive speedups when done by ems.

The beginning of this thread was Eliezer making a comment to the effect that symbolic logic is something computers can do so it must not be what makes humans more special than chimps. It was a pretty mundane comment, and when I saw that it had over ten upvotes I was disappointed and reminded of RationalWiki's claims that the site is a personality cult. rwallace responded by asking Eliezer to live up to the "standards that are found to be effective and well received on lesswrong," though he asked in a fairly snarky way. You not only responded with more snark, but (a) represented a significant "downgrade" from a real response from Eliezer, giving the impression that he has better things to do than respond to serious engagements with his arguments, and (b) did not reply with a serious engagement of the arguments, such as an acknowledgement of a level of evidence.

You could have responded by saying that "fields of knowledge relevant to taking over the world seem much more likely to me to be social areas where big insights are valuable and less like CAD where compiling processes take time. Therefore while your thesis that many areas of an em's speedup will be curve-constrained may be true, it still seems unlikely to affect the probability of a FOOM."

In which case you would have presented what rwallace requested--a possibility of falsification--without any need to accept his arguments. If Eliezer had replied in this way in the first place, perhaps no one involved in this conversation would have gotten annoyed and wasted the possibility of a valuable discussion.

I agree that this thread of comments has been generally lacking in the standards of argument usually present on LessWrong. But from my perspective you have not been bringing the conversation up to a higher level as much as stoking the fire of your initial disagreement.

I am disappointed in you, and by the fact that you were upvoted while rwallace was downvoted; this seems like a serious failure on the part of the community to maintain its standards.

To be clear: I do not agree with rwallace's position here, I do not think that he was engaging at the level that is common and desirable here. But you did not make it easier for him to do that, you made it harder, and that is far more deserving of downvotes.

Comment author: Eliezer_Yudkowsky 05 November 2010 06:07:22PM 0 points [-]

Eliezer making a comment to the effect that symbolic logic is something computers can do so it must not be what makes humans more special than chimps.

I did not say that. I said that symbolic logic probably wasn't It. You made up your own reason why, and a poor one.

Comment author: shokwave 05 November 2010 06:55:07PM 4 points [-]

Out of morbid curiosity, what is your reason for symbolic logic not being it?

Comment author: magfrump 05 November 2010 09:54:15PM 2 points [-]

That's fair. I apologize, I shouldn't have put words in your mouth. That was the impression I got, but it was unfounded to say it came from you.

Comment author: wedrifid 05 November 2010 10:39:32AM 0 points [-]

I am disappointed in you

This would seem to suggest that you expected something different from me, that is better according to your preferences. This surprises me - I think my comments here are entirely in character, whether that character is one that appeals to you or not. The kind of objections I raise here are also in character. I consistently object to arguments of this kind and used in the way they are here. Perhaps ongoing dislike or disrespect would be more appropriate than disappointment?

Comment author: magfrump 05 November 2010 05:18:52PM 2 points [-]

You are one of the most prolific posters on Less Wrong. You have over 6000 karma, which means that for anyone who has some portion of their identity wrapped up in the quality of the community, you serve as at least a partial marker of how well that community is doing.

I am disappointed that such a well-established member of our community would behave in the way you did; your 6000 karma gives me the expectations that have not been met.

I realize that you may represent a slightly different slice of the LessWrong personality spectrum than I do, and this probably accounts for some amount of the difference, but this appeared to me to be a breakdown of civility, which is not, or at least should not be, dependent on your personality.

I don't know you well enough to dislike you. I've seen enough of your posts to know that you contribute to the community in a positive way most of the time. Right now it just feels like you had a bad day and got upset about the thread and didn't give yourself time to cool off before posting again. If this is a habit for you, then it is my opinion that it is a bad habit and I think you can do better.

Comment author: Larks 05 November 2010 02:50:00PM 0 points [-]

If you think that most Singularities will be Unfriendly, the Anthropic Shadow means that their absence from our time-line isn't very strong evidence against their being likely in the future: no matter what proportion of the multiverse sees the light cone paperclipped in 2005, all the observers in 2010 will be in universes that weren't ravaged.

Comment author: rwallace 05 November 2010 08:05:53PM 1 point [-]

This is true if you think the maximum practical speed of interstellar colonization will be extremely close to (or faster than) the speed of light. (In which case, it doesn't matter whether we are talking Singularity or not, friendly or not, only that colonization suppresses subsequent evolution of intelligent life, which seems like a reasonable hypothesis.)

If the maximum practical speed of interstellar colonization is significantly slower than the speed of light (and assuming mass/energy as we know them remain scarce resources, e.g. advanced civilizations don't Sublime into hyperspace or whatever), then we would be able to observe advanced civilizations in our past light cone whose colonization wave hasn't yet reached us.

Of course there is as yet no proof of either hypothesis, but such reasonable estimates as we currently have suggest the latter.

Comment author: Document 05 November 2010 11:48:06PM *  0 points [-]

If the maximum practical speed of interstellar colonization is significantly slower than the speed of light (and assuming mass/energy as we know them remain scarce resources, e.g. advanced civilizations don't Sublime into hyperspace or whatever), then we would be able to observe advanced civilizations in our past light cone whose colonization wave hasn't yet reached us.

Nitpick: If the civilization is spreading by SETI attack, observing them could be the first stage of being colonized by them. But I think the discussion may be drifting off-point here. (Edited for spelling.)

Comment author: wedrifid 05 November 2010 04:15:25AM *  0 points [-]

Nope, it's exactly in the core of my expertise.

You are not an expert on recursive self improvement, as it relates to AGI or the phenomenon in general.

Comment author: JoshuaZ 05 November 2010 04:18:28AM 3 points [-]

You are not an expert on recursive self improvement, as it relates to AGI or the phenomenon in general.

In fairness, I'm not sure anyone is really an expert on this (although this doesn't detract from your point at all.)

Comment author: wedrifid 05 November 2010 04:26:01AM *  5 points [-]

In fairness, I'm not sure anyone is really an expert on this (although this doesn't detract from your point at all.)

You are right, and I would certainly not expect anyone to have such expertise for me to take their thoughts seriously. I am simply wary of Economists (Robin) or AGI creator hopefuls claiming that their expertise should be deferred to (only relevant here as a hypothetical pseudo-claim). Professions will naturally try to claim more territory than would be objectively appropriate. This isn't because the professionals are actively deceptive but rather because it is the natural outcome of tribal instincts. Let's face it - intellectual disciplines and fields of expertise are mostly about pissing on trees, but with better hygiene.

Comment author: XiXiDu 05 November 2010 06:02:24PM 0 points [-]

Predictions, for crying out loud.

Yes, but why would the antipredictions of an AGI researcher not outweigh yours, as they are directly inverse? Further, if your predictions are not falsifiable then they are by definition true and cannot be refuted. Therefore it is not unreasonable to ask what would disqualify your predictions, so as to be able to argue based on diverging opinions here. Otherwise, as I said above, we'll have two inverse predictions outweighing each other, and not the discussion about risk estimations we should be having.

Comment author: wedrifid 05 November 2010 08:47:29PM 0 points [-]

The claim being countered was falsifiability. Your reply here is beyond irrelevant to the comment you quote.

Comment author: XiXiDu 06 November 2010 09:26:35AM *  0 points [-]

rwallace said it all in his comment that has been downvoted. Since I'm unable to find anything wrong with his comment and don't understand yours at all, which has for unknown reasons been upvoted, there's no way for me to counter what you say beyond what I've already said.

Here's a wild guess at what I believe to be the positions. rwallace asks you what information would make you update or abandon your predictions. You in turn seem to believe that predictions are just that: utterances of what might be possible, unquestionable and not subject to any empirical criticism.

I believe I'm at least smarter than the general public, although I haven't read a lot of Less Wrong yet. Further I'm always willing to announce that I have been wrong and to change my mind. This should at least make you question your communication skills regarding outsiders, a little bit.

Unfalsifiability has a clear meaning when it comes to creating and discussing theories and it is inapplicable here to the point of utter absurdity.

Theories are collections of proofs, and a hypothesis is a prediction or collection of predictions that must be falsifiable or proven before it can become part of such a collection of proofs, that is, a theory. It is not absurd at all to challenge predictions based on their refutability, as any prediction that isn't falsifiable will be eternal and therefore useless.

Comment author: wedrifid 06 November 2010 10:34:37AM *  -1 points [-]

The Wikipedia article on falsifiability would be a good place to start if you wish to understand what is wrong with the way falsification has been used (or misused) here. With falsifiability understood, seeing the problem should be straightforward.

Comment author: XiXiDu 06 November 2010 12:47:28PM 1 point [-]

I'll just back out and withdraw my previous statements here. I was already reading that Wiki entry when you replied. It would certainly take too long to figure out where I might be wrong here. I thought falsifiability was sufficiently clear to me to ask what would change someone's mind if I believe that a given prediction is insufficiently specific.

I have to immerse myself in the shallows that are the foundations of falsifiability (philosophy). I have done so in the past and will continue to do so, but that will take time. Nothing so far has really convinced me that an unfalsifiable idea can provide more than hints of what might be possible and therefore something new to try. Yet empirical criticism, in the form of the eventual realization of one's ideas, or a proof of contradiction (respectively inconsistency), seems to be the best grounding of any truth-value (at least in retrospect to a prediction). That is why I like to ask what information would change one's mind about an idea, prediction or hypothesis. I call this falsifiability. If one replied, "nothing, falsifiability is misused here", I would conclude that his idea is unfalsifiable. Maybe wrongly so!

Comment author: wedrifid 07 November 2010 07:49:11AM 0 points [-]

Thou art wise.