In response to Higher Purpose
Comment author: Tyrrell_McAllister2 23 January 2009 03:17:58PM 0 points

Daniel Dennett's standard response to the question "What's the secret of happiness?" is "The secret of happiness is to find something more important than you are and dedicate your life to it."

I think that this avoids Eliezer's criticism that "you can't deliberately pursue 'a purpose that takes you outside yourself', in order to take yourself outside yourself. That's still all about you." Something can be more important than you and yet include you. Depending on your values, the future of the human race itself could serve as an example. It would also seem to remain available as a "hedonic accessory" in any eutopia that includes humanity in some form.

Comment author: Tyrrell_McAllister2 04 December 2008 04:07:51PM 1 point

But has that been disproved? I don't really know. I would imagine, though, that Moravec could always append, ". . . provided that we found the right 10 trillion calculations." Or am I missing the point?

In response to Thanksgiving Prayer
Comment author: Tyrrell_McAllister2 28 November 2008 05:37:30PM 2 points

Here's a Daniel Dennett essay that seems appropriate:

Thank Goodness!

In response to Thanksgiving Prayer
Comment author: Tyrrell_McAllister2 28 November 2008 10:13:53AM 0 points

Maybe it was the categorical nature of "no danger whatsoever" that led to the comparisons to religion. Given the difficulty of predicting anyone's psychological development, and given that you yourself say that you've seen multiple lapses before, what rational reason could you have for such complete confidence? Of course, it's true that there are things besides religion that cause people to make predictions with probability 1 (which, you must concede, is a plausible reading of "no danger whatsoever"). But, in human affairs, with our present state of knowledge, can such predictions ever be entirely reasonable?
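(As an aside, a toy calculation makes the worry about probability-1 predictions concrete. The sketch below is my own illustration, not part of the original exchange: it applies Bayes' rule to show that a belief held with probability 1 is epistemically frozen, since no evidence, however lopsided, can ever move it.)

def posterior(prior, likelihood_if_true, likelihood_if_false):
    # Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]
    numerator = likelihood_if_true * prior
    return numerator / (numerator + likelihood_if_false * (1.0 - prior))

# Evidence that is 1000x more likely if the hypothesis is false:
print(posterior(0.99, 0.001, 1.0))  # ~0.09 -- a 0.99 belief collapses
print(posterior(1.00, 0.001, 1.0))  # 1.0  -- a probability-1 belief never moves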

Comment author: Tyrrell_McAllister2 24 November 2008 06:17:09PM 0 points

anon and Chris Hibbert, I definitely didn't mean to say that Robin is claiming to be working with as much certainty as Fermi could claim. I didn't mean to be making any claim about the strength or content of Robin's argument at all, other than that he's assigning low probability to something to which Eliezer assigns high probability.

Like I said, the analogy with the Fermi story isn't very good, for many reasons; one is that Robin's argument is couched in more informal language than mathematical physics. My point was just that a critique of Fermi should have addressed his calculations, pointing out where exactly he went wrong (if such a point could be found). Eliezer, in contrast, isn't really grappling with Robin's theorizing in a direct way at all. I'd still like to see Eliezer address it with more directness.

As it is, this exchange doesn't really read like a conversation. Or, it reads like Robin wants to engage in a conversation. Eliezer, on the other hand, seems to think that he has identified flaws in Robin's thinking, but the only way he can see to address them is by writing about how to think in general, or at least how to think about a very broad class of questions, of which this issue is only a very special case.

I gather that, in Eliezer's view, Robin's argument is so flawed that there's no way for Eliezer to address it on its own terms. Rather, he needs to build a solid foundation for reasoning about these things from the ground up. The Proper Way to answer this question will then be manifest, and Robin's arguments will fall by the wayside, clearly wrong simply by virtue of not being the Proper Way.

Eliezer may be right about that. Indeed, I think it's a real possibility. Maybe that's really the only way that these kinds of things can be settled. But it's not a conversation. And maybe that will be the lesson that comes out of this. Maybe conversation is overrated.

None of this is supposed to be a criticism of either Eliezer's or Robin's side of this specific issue. It's a criticism of how the conversation is being carried out. Or maybe just an expression of impatience.

Comment author: Tyrrell_McAllister2 24 November 2008 11:35:19AM 4 points

I've been following along and enjoying the exchange so far, but it doesn't seem to be getting past the "talking past each other" phase.

For example, the Fermi story works as an example of a cycle as a source of discontinuity. But I don't see how it establishes anything that Robin would have disputed. I guess that Eliezer would say that Robin has been inattentive to its lessons. But he should then point out where exactly Robin's reasoning fails to take those lessons into account. Right now, he just seems to be pointing to an example of cycles and saying, "Look, a cycle causing discontinuity. Does that maybe remind you of something that perhaps your theorizing has ignored?" I imagine that Robin's response will just be to say, "No," and no progress will have been made.

And, of course, once the Fermi story is told, I can't help but think of how else it might be analogous to the current discussion. When I look at the Fermi story, what I see is this: Fermi took a powerful model of reality and made the precise prediction that something huge would happen between layers 56 and 57, whereas someone without that model would have just thought, "I don't see how 57 is so different from 56." What I see happening in this conversation is that Robin says, "Using a powerful model of reality, I predict that an event, which Eliezer thinks is very likely, will actually happen only with probability <10%." (I haven't yet seen a completely explicit consensus account of Robin and Eliezer's disagreement, but I gather that it's something like that.) And Eliezer's replies seem to me to be of the form "You shouldn't be so confident in your model. Previous black swans show how easily predictions based on past performance can be completely wrong."
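(To make the Fermi half of the analogy concrete, here is a toy model of my own devising, not Fermi's actual calculation. The total progeny of a single neutron is the geometric series 1 + k + k^2 + . . . , which stays finite while the multiplication factor k is below 1 and diverges once k reaches 1, so a smooth rise in k as layers are added produces a sharp break. The mapping from layer number to k below is a made-up assumption, chosen so that criticality falls exactly at layer 57.)

def total_neutrons(k, generations=10000):
    # Sum the geometric series 1 + k + k^2 + ... over a fixed horizon.
    total, current = 0.0, 1.0
    for _ in range(generations):
        total += current
        current *= k  # each neutron yields k neutrons in the next generation
    return total

for layer in (50, 54, 56, 57):
    k = layer / 57  # toy assumption: criticality (k = 1) exactly at layer 57
    print(layer, round(total_neutrons(k), 1))
# The totals climb smoothly (about 8.1, 19.0, 57.0) and then, at layer 57,
# the series no longer converges -- the total just grows with the horizon.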

I concede that the analogy between the Fermi story and the current conversation is not the best fit. But if I pursue it, what I get is this: Robin is in a sense claiming to be the Fermi in this conversation. He says that he has a well-established body of theory that makes a certain prediction: that Eliezer's scenario has very low probability of happening.

Eliezer, on the other hand, is more like someone who, when presented with Fermi's predictions (before they'd been verified), might have said, "How can you be so confident in your theory? Don't you realize that a black swan could come and upset it all? For example, maybe a game-changing event could happen between layers 32 and 33, preventing layer 57 from even occurring. Have you taken that possibility into account? In fact, I expect that something will happen at some point to totally upset your neat little calculations."

Such criticisms should be backed up with an account of where, exactly, Fermi is making a mistake by being so confident in his prediction about layer 57. Similarly, Eliezer should say where exactly he sees the flaws in Robin's specific arguments. Instead, we get these general exhortations to be wary of black swans. Although such warnings are important, I don't see how they cash out in this particular case as evidence that Robin is the one who is being too confident in his predictions.

In other words, Robin and Eliezer have a disagreement that (I hope) ultimately cashes out as a disagreement about how to distribute probability over the possible futures. But Eliezer's criticisms of Robin's methods are all very general; they point to how hard it is to make such predictions. He argues, in a vague and inexact way, that predictions based on similar methods would have gone wrong in the past. But Eliezer seems to dodge laying out exactly where Robin's methods go wrong in this particular case and why Eliezer's succeed.

Again, the kinds of general warnings that Eliezer gives are very important, and I enjoy reading them. It's valuable to point out all the various quarters from which a black swan could arrive. But, for the purposes of this argument, he should point out how exactly Robin is failing to heed these warnings sufficiently. Of course, maybe Eliezer is getting to that, but some assurance of that would be nice. I have a large appetite for Eliezer's posts, construed as general advice on how to think. But when I read them as part of this argument with Robin, I keep waiting for him to get to the point.

Comment author: Tyrrell_McAllister2 20 November 2008 07:48:44AM 1 point

Tim Tyler,

I wrote: "I don't yet see why exactly Eliezer is dwelling on the origin of replicators."

You replied: "Check with the title: if you are considering the possibility of a world takeover, it obviously pays to examine the previous historical genetic takeovers."

Right. I get the surface analogy. But it seems to break down when I look at its deeper structure.

Comment author: Tyrrell_McAllister2 19 November 2008 05:59:15PM 0 points

Oops; I should have noted that I added emphasis to those quotes of Eliezer. Sorry.

Comment author: Tyrrell_McAllister2 19 November 2008 05:57:24PM 2 points

I don't yet see why exactly Eliezer is dwelling on the origin of replicators. As Robin said, it would have been very surprising if he had disagreed with any of it.

I guess that Eliezer's main points were these: (1) The origin of life was an event where things changed abruptly in a way that wouldn't have been predicted by extrapolating from the previous 9 billion years. Moreover, (2) pretty much the entire mass of the universe, minus a small tidal pool, was basically irrelevant to how this abrupt change played out and continues to play out. That is, the rest of the universe only mattered with regard to its gross features. It was only in that tidal pool that the precise arrangement of molecules had and will have far-reaching causal implications for the fate of the universe.

Eliezer seems to want to argue that we should expect something like this when the singularity comes. His conclusion seems to be that it is futile to survey the universe as it is now to try to predict detailed features of the singularity. For, if the origin of life is any guide, practically all detailed features of the present universe will prove irrelevant. Their causal implications will be swept aside by the consequences of some localized event that is hidden in some obscure corner of the world, below our awareness. Since we know practically nothing about this event, our present models can't take it into account, so they are useless for predicting the details of its consequences. That, at any rate, is what I take his argument to be.

There seems to me to be a crucial problem with this line of attack on Robin's position. As Eliezer writes of the origin of life,

The first replicator was the first great break in History - the first Black Swan that would have been unimaginable by any surface analogy. No extrapolation of previous trends could have spotted it - you'd have had to dive down into causal modeling, in enough detail to visualize the unprecedented search.

Not that I'm saying I would have guessed, without benefit of hindsight - if somehow I'd been there as a disembodied and unreflective spirit, knowing only the previous universe as my guide - having no highfalutin' concepts of "intelligence" or "natural selection" because those things didn't exist in my environment, and I had no mental mirror in which to see myself - and indeed, who should have guessed it, short of godlike intelligence? When all the previous history of the universe contained no break in History that sharp? The replicator was the first Black Swan.

The difference with Robin's current position, if I understand it, is that he doesn't see our present situation as one in which such a momentous development is inconceivable. On the contrary, he conceives of it as happening through brain-emulation.

Eliezer seems to me to establish this much: if our present models do not predict an abrupt change on the order of the singularity, and if such a change nonetheless happens, then it will probably spring out of some very local event that wipes out the causal implications of all but the grossest features of the rest of the universe. However, Robin believes that our current models already predict a singularity-type event. If he's right (a big if!), then a crucial hypothesis of Eliezer's argument fails to obtain, and the analogy with the origin of life that Eliezer makes in this post breaks down.

So the root of the difference between Eliezer and Robin seems to be this: Do our current models already give some significant probability to the singularity arising out of processes that we already know something about, e.g., the development of brain emulation? If so, then the origin of life was a crucially different situation, and we can't draw the lessons from it that Eliezer wants to.

Comment author: Tyrrell_McAllister2 23 October 2008 05:28:00PM 0 points

gaffa: A heavy obstacle for me is that I have a hard time thinking in terms of math, numbers and logic. I can understand concepts on the superficial level and kind of intuitively "feel" their meaning in the back of my mind, but I have a hard time bringing the concepts into the front of my mind and visualizing them in detail using mathematical reasoning. I tend to end up in a sort of "I know that you can calculate X with this information, and knowing this is good enough for me"-state, but I'd like to be in the state where I am using the information to actually calculate the value of X in my head.

I've found that the only way to get past this is to practice solving problems a whole bunch. If your brain doesn't already have the skill of looking at a problem and slicing it up into all the right pieces with the right labels so that a solution falls out, then the only way to get it to do that is to practice a lot.

I recommend getting an introductory undergraduate text in whatever field you want to understand mathematically, one with lots of exercises and a solutions manual. Read a chapter and then just start grinding through one exercise after another. On each exercise, give yourself a certain allotted time to try to solve it on your own, maybe 20 or 30 minutes or so. If you haven't solved it before the clock runs out, read the solutions manual and then work through it yourself. Then move on to the next problem, again trying to solve it within an allotted time.
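(For what it's worth, the timeboxing part of this routine is easy to mechanize. Here is a trivial sketch of mine; the exercise names and the 25-minute window are placeholders.)

import time

def practice_session(exercises, minutes_per_problem=25):
    for name in exercises:
        print(f"Start {name}: you have {minutes_per_problem} minutes.")
        time.sleep(minutes_per_problem * 60)  # work until the window closes
        print(f"Time's up on {name}: check the solutions manual, then move on.")

practice_session(["Exercise 1.1", "Exercise 1.2", "Exercise 1.3"])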

Don't worry too much if the solutions manual whips out some crazy trick that seems totally unmotivated to you. Just make sure that you understand why the trick works, and then move on. Once you see the "trick" enough times, it will start to seem like the obvious thing to try, not a trick at all.
