lavalamp comments on Timeless Identity - Less Wrong

Post author: Eliezer_Yudkowsky 03 June 2008 08:16AM


Comment author: lavalamp 02 October 2013 12:13:58AM 1 point [-]

I think we have the same model of the situation, but I feel compelled to normalize my probability. A guess as to why:

I can rephrase Mark's question as, "In 10 hours, will you remember having gone to the beach or having bowled?" (Assume the simulation will continue running!) There'll be a you that went bowling and a you that went to the beach, but no single you that did both of those things. Your successive wakings example doesn't have this property.

I suppose I answer 50% to indicate my uncertainty about which future self we're talking about, since there are two possible referents. Maybe this is unhelpful.

Comment author: TheOtherDave 02 October 2013 12:44:38AM *  1 point [-]

Yes, that seems to be what's going on.

That said, normalizing my probability as though there were only going to be one of me at the end of the process doesn't seem at all compelling to me. I don't have any uncertainty about which future self we're talking about -- we're talking about both of them.

Suppose that you and your husband are planning to take the day off tomorrow, and he is planning to go bowling, and you are planning to go to the beach, and I ask the two of you "what's y'all's probability that one of y'all will go bowling, and what's y'all's probability that one of y'all will go to the beach?" It seems the correct answers to those questions will add up to more than 1, even though no one person will experience bowling AND going to the beach. In 10 hours, one of you will remember having gone to the beach, and one will remember having bowled.
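The point that per-person probabilities needn't sum to 1 can be made concrete with a toy calculation (a sketch in Python; the outcome labels are illustrative, not from the original discussion):

```python
# Two future people (or two future copies of one person), each with a
# definite, different experience.
copies = ["went bowling", "went to the beach"]

# Probability that at least one future copy remembers a given experience.
# With definite outcomes, this is just membership.
def p_some_copy_remembers(experience):
    return 1.0 if experience in copies else 0.0

p_bowl = p_some_copy_remembers("went bowling")
p_beach = p_some_copy_remembers("went to the beach")
print(p_bowl + p_beach)  # 2.0 -- the answers sum to more than 1
```

The sum exceeds 1 precisely because the two questions are about two different people, even though, in the duplication case, both of them are presently "me".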

This is utterly unproblematic when we're talking about two people.

In the duplication case, we're still talking about two people, it's just that right now they are both me, so I get to answer for both of them. So, in 10 hours, I (aka "one of me") will remember having gone to the beach. I will also remember having bowled. I will not remember having gone to the beach and having bowled. And my probabilities add up to more than 1.

I recognize that it doesn't seem that way to you, but it really does seem like the obvious way to think about it to me.

Comment author: lavalamp 02 October 2013 12:59:53AM 0 points [-]

> I recognize that it doesn't seem that way to you, but it really does seem like the obvious way to think about it to me.

I think your description is coherent and describes the same model of reality I have. :)

Comment author: [deleted] 02 October 2013 12:52:47AM *  0 points [-]

> I can rephrase Mark's question as, "In 10 hours, will you remember having gone to the beach or having bowled?"

Yes. Probabilities aside, this is what I was asking.

> I suppose I answer 50% to indicate my uncertainty about which future self we're talking about, since there are two possible referents.

I was asking a disguised question. I really wanted to know: "which of the two future selves do you identify with, and why?"

Comment author: lavalamp 02 October 2013 12:55:33AM *  1 point [-]

> I was asking a disguised question. I really wanted to know: "which of the two future selves do you identify with, and why?"

Oh, that's easy. Both of them, equally. Assuming accurate enough simulations etc., of course.

ETA: Why? Well, they'll both think that they're me, and I can't think of a way to disprove the claim of one without also disproving the claim of the other.

Comment author: [deleted] 02 October 2013 08:00:20PM -1 points [-]

> ETA: Why? Well, they'll both think that they're me, and I can't think of a way to disprove the claim of one without also disproving the claim of the other.

Any of the models of consciousness-as-continuity would offer a definitive prediction.

Comment author: lavalamp 02 October 2013 08:24:16PM -1 points [-]

> Any of the models of consciousness-as-continuity would offer a definitive prediction.

IMO, there literally is no fact of the matter here, so I will bite the bullet and say that any model that supposes there is one is wrong. :) I'll reconsider if you can point to an objective feature of reality that changes depending on the answer to this. ("So-and-so will think it to be immoral" doesn't count!)

Comment author: [deleted] 02 October 2013 09:10:41PM *  0 points [-]

I won't because that's not what I'm arguing. My position is that subjective experience has moral consequences, and therefore matters.

PS: The up/down karma vote isn't a record of what you agree with, but of whether a post has been reasonably argued.

Comment author: lavalamp 02 October 2013 09:23:44PM 0 points [-]

> I won't because that's not what I'm arguing. My position is that subjective experience has moral consequences, and therefore matters.

OK, that's fine, but I'm not convinced-- I'm having trouble thinking of something that I consider to be a moral issue that doesn't have a corresponding consequence in the territory.

PS: That downvote wasn't me. I'm aware of how votes work around here. :)

Comment author: [deleted] 02 October 2013 09:35:11PM *  -1 points [-]

Example: is it moral to power-cycle (hibernate, turn off, power on, restore) a computer running a self-aware AI? Will future machine intelligences view any less-than-necessary AGI experiments I run the same way we do Josef Mengele's work in Auschwitz? Is it a possible failure mode that an unfriendly/not-provably-friendly AI that experiences routine power cycling might uncover this line of reasoning and decide it doesn't want to “die” every night when the lights go off? What would it do then?

Comment author: lavalamp 02 October 2013 09:50:52PM 1 point [-]

OK, in a hypothetical world where somehow pausing a conscious computation--maintaining all data such that it could be restarted losslessly--is murder, those are concerns. Agreed. I'm not arguing against that.

My position is that pausing a computation as above happens to not be murder/death, and that those who believe it is murder/death are mistaken. The example I'm looking for is something objective that would demonstrate this sort of pausing is murder/death. (In my view, the bad thing about death is its permanence, that's most of why we care about murder and what makes it a moral issue.)

Comment author: shminux 02 October 2013 10:08:08PM *  -1 points [-]

As Eliezer mentioned in his reply (in different words), if power cycling is death, what's the shortest suspension time that isn't? Currently most computers run synchronously off a common clock. The computation is completely suspended between clock cycles. Does this mean that an AI running on such a computer is murdered billions of times every second? If so, then a morality leading to this absurd conclusion is not a useful one.

Edit: it's actually worse than that: digital computation happens mostly within a short time of the clock level switch. The rest of the time between transitions is just to ensure that the electrical signals relax to within their tolerance levels. Which means that the AI in question is likely dead 90% of the time.

Comment author: [deleted] 02 October 2013 10:24:49PM *  -1 points [-]

What Eliezer and you describe is more analogous to task switching on a timesharing system, and yes, my understanding of computational continuity theory is that such a machine would not be sent to oblivion 120 times a second. Rather, such a computer would be strangely schizophrenic, but also completely self-consistent at any moment in time.

But computational continuity does have a different answer in the case of intermediate non-computational states. For example, saving the state of a whole brain emulation to magnetic disk, shutting off the machine, and restarting it sometime later. In the meantime, shutting off the machine resulted in decoupling/decoherence of state between the computational elements of the machine, and general reversion back to a state of thermal noise. This does equal death-of-identity, and is similar to the transporter thought experiment. The relevance may be more obvious when you think about taking the drive out and loading it in another machine, copying the contents of the disk, or running multiple simulations from a single checkpoint (none of these change the facts, however).
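The checkpoint-and-fork scenario can be sketched in a few lines (Python; the toy `state` dict is a hypothetical stand-in for an emulation's full state, not a real WBE format):

```python
import pickle

# Toy stand-in for an emulation's complete state at checkpoint time.
state = {"memories": ["went bowling"], "tick": 1000}

# Save to "disk", power the machine off, then restore twice from the
# same checkpoint -- i.e., run multiple simulations from one save.
blob = pickle.dumps(state)
run_a = pickle.loads(blob)
run_b = pickle.loads(blob)

# The restored runs start identical but immediately diverge,
# each with an equal claim to being the continuation of `state`.
run_a["memories"].append("went to the beach")
run_b["tick"] += 1

assert run_a != run_b  # two distinct histories from a single checkpoint
```

Whether this round trip through inert bytes on disk preserves identity is exactly the point under dispute; the code only shows that nothing in the data itself distinguishes the two successors.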

Comment author: TheOtherDave 02 October 2013 10:22:38PM 1 point [-]

For many people, the up/down karma vote is a record of what we want more/less of.

Comment author: wedrifid 03 October 2013 06:03:28AM 1 point [-]

> PS: The up/down karma vote isn't a record of what you agree with, but of whether a post has been reasonably argued.

It is neither of those things. This isn't debate club. We don't have to give people credit for finding the most clever arguments for a wrong position.

I make no comment on what the subject of debate is in this context (I don't know or care which party is saying crazy things about 'consciousness'). I downvoted the parent specifically because it made a normative assertion about how people should use the karma mechanism, which is neither something I support nor an accurate description of an accepted cultural norm. This is an example of voting being used legitimately in a way that has nothing to do with whether the post has been reasonably argued.

Comment author: [deleted] 03 October 2013 06:53:58AM 1 point [-]

I did use the term "reasonably argued" but I didn't mean clever. Maybe "rationally argued"? By my own algorithm, a cleverly argued but clearly wrong argument would not garner an upvote.

I gave you an upvote for explaining your downvote.

Comment author: wedrifid 03 October 2013 11:19:17AM 1 point [-]

> I did use the term "reasonably argued" but I didn't mean clever. Maybe "rationally argued"? By my own algorithm, a cleverly argued but clearly wrong argument would not garner an upvote.

You are right, 'clever' contains connotations that you wouldn't intend. I myself have used 'clever' as a term of disdain and I don't want to apply that to what you are talking about. Let's stick with either of the terms you used and agree that we are talking about arguments that are sound, cogent and reasonable, rather than artful rhetoric that exploits known biases in human social behaviour to score persuasion points. I maintain that even then down-votes are sometimes appropriate. Allow me to illustrate.

There are two outwardly indistinguishable boxes with buttons that display heads or tails when pressed. You know that one of the boxes shows heads 70% of the time, the other 40% of the time. A third party, Joe, has pressed the first box's button three times and tells you that each time it showed heads. This represents an argument that the first box is the "70%" box. Now, assume that I have observed the internals of the boxes and know that the first box is, in fact, the 40% box.
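For concreteness, the strength of Joe's evidence can be worked out with Bayes' theorem (a sketch; the equal priors are my own assumption, not stated in the example):

```python
# Hypotheses about the first box, with no prior reason to favor either.
prior_70 = prior_40 = 0.5

# Joe observed three heads in a row from the first box.
likelihood_70 = 0.7 ** 3  # P(3 heads | first box is the 70% box) = 0.343
likelihood_40 = 0.4 ** 3  # P(3 heads | first box is the 40% box) = 0.064

# Bayes' theorem: posterior probability that the first box is the 70% box.
posterior_70 = (likelihood_70 * prior_70) / (
    likelihood_70 * prior_70 + likelihood_40 * prior_40
)
print(round(posterior_70, 3))  # 0.843
```

So Joe's three observations are genuinely strong (though not conclusive) evidence for the wrong conclusion, which is what makes the downvoting question interesting.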

Whether I downvote Joe's comment depends on many things. Obviously, tone matters a lot, as does my impression of whether Joe's bias is based on disingenuousness or more innocent ignorance. But even in the case when Joe is arguing in good faith, there are some cases where a policy attempting to improve the community will advocate downvoting the contribution. For example, if there is a significant selection bias in what kind of evidence people like Joe have exposed themselves to, then popular perception after such people share their opinions will tend to be even more biased than the individuals alone. In that case downvoting Joe's comment improves the discussion. The ideal outcome would be for Joe to learn to stfu until he learns more.

More simply I observe that even the most 'rational' of arguments can be harmful if the selection process for the creation and repetition of those arguments is at all biased.