Tyrrell_McAllister2 comments on Cascades, Cycles, Insight... - Less Wrong

Post author: Eliezer_Yudkowsky 24 November 2008 09:33AM

Comment author: Tyrrell_McAllister2 24 November 2008 11:35:19AM 4 points

I've been following along and enjoying the exchange so far, but it doesn't seem to be getting past the "talking past each other" phase.

For example, the Fermi story works as an example of a cycle as a source of discontinuity. But I don't see how it establishes anything that Robin would have disputed. I guess that Eliezer would say that Robin has been inattentive to its lessons. But he should then point out where exactly Robin's reasoning fails to take those lessons into account. Right now, he just seems to be pointing to an example of cycles and saying, "Look, a cycle causing discontinuity. Does that maybe remind you of something that perhaps your theorizing has ignored?" I imagine that Robin's response will just be to say, "No," and no progress will have been made.

And, of course, once the Fermi story is told, I can't help but think of how else it might be analogous to the current discussion. When I look at the Fermi story, what I see is this: Fermi took a powerful model of reality and made the precise prediction that something huge would happen between layers 56 and 57, whereas someone without that model would have just thought, "I don't see how 57 is so different from 56." What I see happening in this conversation is that Robin says, "Using a powerful model of reality, I predict that an event, which Eliezer thinks is very likely, will actually happen only with probability <10%." (I haven't yet seen a completely explicit consensus account of Robin and Eliezer's disagreement, but I gather that it's something like that.) And Eliezer's replies seem to me to be of the form "You shouldn't be so confident in your model. Previous black swans show how easily predictions based on past performance can be completely wrong."

I concede that the analogy between the Fermi story and the current conversation is not the best fit. But if I pursue it, what I get is this: Robin is, in a sense, claiming to be the Fermi in this conversation. He says that he has a well-established body of theory that makes a certain prediction: that Eliezer's scenario has a very low probability of happening.

Eliezer, on the other hand, is more like someone who, when presented with Fermi's predictions (before they'd been verified), might have said, "How can you be so confident in your theory? Don't you realize that a black swan could come and upset it all? For example, maybe a game-changing event could happen between layers 32 and 33, preventing layer 57 from even occurring. Have you taken that possibility into account? In fact, I expect that something will happen at some point to totally upset your neat little calculations."

Such criticisms should be backed up with an account of where, exactly, Fermi is making a mistake by being so confident in his prediction about layer 57. Similarly, Eliezer should say where exactly he sees the flaws in Robin's specific arguments. Instead, we get these general exhortations to be wary of black swans. Although such warnings are important, I don't see how they cash out in this particular case as evidence that Robin is the one who is being too confident in his predictions.

In other words, Robin and Eliezer have a disagreement that (I hope) ultimately cashes out as a disagreement about how to distribute probability over the possible futures. But Eliezer's criticisms of Robin's methods are all very general; they point to how hard it is to make such predictions. He argues, in a vague and inexact way, that predictions based on similar methods would have gone wrong in the past. But Eliezer seems to dodge laying out exactly where Robin's methods go wrong in this particular case and why Eliezer's succeed.

Again, the kinds of general warnings that Eliezer gives are very important, and I enjoy reading them. It's valuable to point out all the various quarters from which a black swan could arrive. But, for the purposes of this argument, he should point out how exactly Robin is failing to heed these warnings sufficiently. Of course, maybe Eliezer is getting to that, but some assurance of that would be nice. I have a large appetite for Eliezer's posts, construed as general advice on how to think. But when I read them as part of this argument with Robin, I keep waiting for him to get to the point.