All of lolbifrons's Comments + Replies

Ah. I guess I'm not sure where to go from here, in that case.

Every level but the last one is supposed to be wrong.

The point is they're supposed to be wrong in a specifically crafted way.

Most of them I start to agree with and then you add a conclusion that I don't think follows from the lead-in.

This is by design. I tried to make the levels mutually exclusive. The way I did this was by having each level add a significant insight to the truth (as I see it) and then say something wrong (as I see it) to constrain any further insight.

I think you've got an unstated and likely incorrect assumption that "me" in the past, "me" as experiencing something in the moment and "me" in the future are all singular a
... (read more)
2Dagon
I think my main point in most of this is: True! These scenarios don't actually happen (yet) to humans, and you're trying to extrapolate from a fairly poorly defined base case (individual experiential continuity). However, I think most of them dissolve if you believe (as I do) that consciousness is a purely physical reductive phenomenon. Take the analogy of a light bulb, where you duplicate everything including the current (pun intended) state of electrical potential in the wires and element, but then after duplication allow that the future electrical inputs may vary. You can easily answer all of these questions about anticipated light output levels. It's identical at time of duplication, and diverges afterward.
I hope, at least, that this back-and-forth has been useful?

Absolutely. Talking to you was refreshing, and it helped me not only flesh out my ladder but also pin down my beliefs. Thank you for taking time to talk about this stuff.

so I don’t think I have much to add past what we’ve already gone over.

I did make an attempt to address your last reply. If you still feel that way after, let me know.

I don’t see that understanding QM would suffice to grant #3 and #4 without having solved the Hard Problem. Without actually having a full reduction of consciousness, there’s just no way to be certain that the reasoning you provide makes sense

This should change when you understand QM. I was trying to black box it.

And how do we do this? What makes a person “feel like the same person”, “from the inside”, through the passage of time? Do those quoted phrases even make sense? What do they mean, exactly? We really don’t know.
Can we? It seems like we can, bu
... (read more)

Fair enough. I don't see them as gibberish, so treating them that way is hard. I admit I didn't actually see what you meant.

I made an attempt to treat it as a black box in a different thread reply, but I still had to use the language of QM. I might be able to sum it up into short sentences as well, but I wanted to start with some amount of formality and explanation.

3Said Achmiz
Indeed, I’ve now read those comments, and I do appreciate it. As I think we’ve agreed now, further progress requires me to have a good understanding of QM, so I don’t think I have much to add past what we’ve already gone over. I hope, at least, that this back-and-forth has been useful?

We gain these:

1. Everything obeys QM. To wit, nothing can exist anywhere/when that is not describable in the math of QM in principle.

2. If everything obeys QM, consciousness obeys QM.

3. As long as consciousness is not or does not consist of some fundamental element that does not obey QM, there is nothing anywhere that can differentiate between copies in principle in any way besides how we can differentiate between a person in the past and the same person in the future, having taken a mundane trajectory through spacetime. This includes how it "feels ... (read more)

3Said Achmiz
I grant (for the sake of argument) #1 and #2. I don’t see that understanding QM would suffice to grant #3 and #4 without having solved the Hard Problem. Without actually having a full reduction of consciousness, there’s just no way to be certain that the reasoning you provide makes sense. This is in large part because the reasoning has “holes” in it—that is, parts which we currently take essentially on faith, pending a resolution of the Hard Problem. Some specifics: And how do we do this? What makes a person “feel like the same person”, “from the inside”, through the passage of time? Do those quoted phrases even make sense? What do they mean, exactly? We really don’t know. Can we? It seems like we can, but… is that just an illusion? Somehow? Why does it seem like consciousness is continuous? Or is that a confused question (as some people indeed seem to claim)? Well, and what if it does? We’re back to the “conditioning on reductionism” thing; until we actually have a full reduction, we just can’t blithely toss about assumptions like this! … actually, we needn’t even go that far. It’s not even certainty of reductionism that you’re suggesting we condition on—it’s certainty of… what? Quantum mechanics applying to everything? But that’s a great deal weaker! I am not nearly as certain of that (in fact, I have no real solid belief about it), so by no means will I condition on a certainty of this claim! This part I actually just don’t get the point of. I mean, you’re not wrong, but so what? As for #5 and #6, well, there I just don’t understand what you’re saying, so I can’t judge whether it’s relevant.

My response here would be the same as my responses to the other outstanding threads.

The limit of [the effect your original prior has on your ultimate posterior] as [the number of updates you've done] approaches infinity is zero. In the grand scheme of things, it doesn't matter what prior you start with. As a convenience, if we have literally no information or evidence, we usually use the uniform prior (equally likely as not, in this case), and then our first update is probably to run it through Occam's razor.
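
Here's a quick toy sketch of what I mean by the prior washing out; the specific coin, its bias, and the Beta-prior bookkeeping below are made-up illustration, not anything we've discussed:

```python
import random

# Toy illustration: two very different priors over a coin's bias end up
# agreeing after enough updates. (The coin and the numbers are invented.)
random.seed(0)
true_bias = 0.7
flips = [random.random() < true_bias for _ in range(10_000)]
heads = sum(flips)
tails = len(flips) - heads

# With a Beta(a, b) prior, the posterior after these flips is
# Beta(a + heads, b + tails); compare the posterior means.
for a, b in [(1, 1), (50, 1)]:  # uniform prior vs. a heavily skewed one
    posterior_mean = (a + heads) / (a + b + heads + tails)
    print(f"prior Beta({a},{b}): posterior mean = {posterior_mean:.4f}")
# Both land close to 0.7, and close to each other, despite the very
# different starting points.
```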

The rest of your objections, if I understand QM and its implications right, fall upon the previously unintuitive a... (read more)

3Said Achmiz
Well, in that case, I’m afraid we have indeed hit a dead end. But I will say this: if (as you seem to be saying) you are unable to treat quantum mechanics as a conceptual black box, and simply explain how its claims (those unrelated to consciousness) allow us to reason about consciousness without dissolving the Hard Problem, then… that is very, very suspicious. (The phrase “impossible to conceive except in hindsight” also raises red flags!) I hope you won’t take it personally if I view this business of “conceptual black swans” with the greatest skepticism. I will, if I can find the time, try to give the QM sequence a close re-read, however.
3Said Achmiz
This doesn’t address my objection. You are responding as if I were skeptical of assigning some particular prior, whereas in fact I was objecting to assigning any prior, or indeed any posterior—because one cannot assign a probability to a string of gibberish! Probability (in the Bayesian framework, anyway—not that any other interpretations would save us here) attaches to beliefs, but I am saying that I can’t have a belief in a statement that is incoherent. (What probability do you assign to the statement that “fish the inverted flawlessly on”? That’s nonsense, isn’t it—word salad? Can the uniform prior help you here?)

I truly do think we can't move further from this point, in this thread of this argument, without you reading and understanding the sequence :(

I could be mistaken, but it seems to me that the distinction you're trying to make between what I'm saying and what I'd have to say for my answer to be responsive dissolves as you understand QM.

I could, of course, be misunderstanding you completely. But there also isn't anything you're linking that I'm unwilling to read :P

3Said Achmiz
Well, to be honest, I don’t think there is anywhere further to move. I mean, suppose I re-read the QM sequence, and this time I understand it. What propositions will I then accept, that I currently reject? What beliefs will I hold, that I currently do not? If I’ve read your comments correctly thus far, then it seems to me that everything you list, in answer to my above questions, will be things that I have already assented to (at least, for the sake of argument, if not in fact). So what is gained?
This is, fundamentally, no more than a stronger version of your “submerged in a crater of concrete” scenario, so by what right do we claim it to be qualitatively different than “he left work two hours ago”?

I agree. At the core, every belief is bayesian. I don't recognize a fundamental difference, just one of categorization. We carved up reality, hopefully at its joints, but we still did the carving. You seemed to be the one arguing a material difference between "has to" and "is".

As an aside, it's possible you missed my e... (read more)

4Said Achmiz
Concerning your edit—no, I really don’t think that it is of the same sort. The prediction of the Higgs Boson was based on a very specific, detailed model, whereas—to continue where the grandparent left off—what you’re asking me to do here is to assent to propositions that are not based on any kind of model, per se, but rather on something like a placeholder for a model. You’re saying: “either these things are true, or we’re wrong about reductionism”. Well, for one thing, “these things” are, as I’ve said, not even clearly coherent. It’s not entirely clear what they mean, because it’s not clear how to reason about this sort of thing, because we don’t have an actual model for how subjective phenomenal consciousness emerges from physics. And, for another thing, the dilemma is a false one—it should properly be a quatrilemma (is that a word…?), like so: “Either these things are true, or we’re wrong about reductionism, or we’re wrong about whether reductionism implies that these things are true, or these things are not so much false as ‘not even wrong’ (because there’s something we don’t currently understand, that doesn’t overturn reductionism but that renders much of our analysis here moot).” “Ah!” you might exclaim, “but we know that reductionism implies these things! That is—we’re quite certain! And it’s really very unlikely that we’re missing some key understanding, that would render moot our reasoning and our scenarios!” To that, I again say: without an actual reduction of consciousness, an actual and complete dissolution of the Hard Problem, no such certainty is possible. And so it is these latter two horns of the quatrilemma which seem to me to be at least as likely as the truth of the higher rungs of the ladder.

Right. I agree that we don't know how, but I submit that we know that they do, and that we believe strongly in reductionism. We can condition conclusions on reductionism and on the belief that they do, without conditioning them on how they do; and I submit further that limiting ourselves in this way is still sufficient to advance up the ladder.

We have a black box - in computer science, an interface - but we don't need to know what's inside if we know everything about its behavior on the outside. We can still use it in an algorithm, and know wh... (read more)

3Said Achmiz
Sorry, I think I wasn’t clear. When I said: I didn’t mean this in the sense of “how is it possible that they do”, but rather in the sense of “in what way do they”. To that formulation, your answer is non-responsive. But we don’t know everything about the black box’s behavior! That’s precisely the problem in the first place! We are, in essence, trying to predict the behavior of the black box. And we’re trying to do it without knowing what’s inside it—which seems futile and ill-advised, given that we can’t exactly observe the box’s behavior, post-hoc! As for the linked Sequence post—again, I really do take your word for it. I just don’t think that stuff is relevant.

Now, I think reductionism is true. But suppose we encounter something we can’t reduce. (Of course your instinct—and mine, in a symmetric circumstance—would be to jump in with a correction: “can’t yet reduce”! I sympathize entirely with this—but in this case, that formulation would beg the question.) We should of course condition on our belief that reductionism is true, and conclude that we’ll be able to find a reduction. But, conversely, we should also condition on the fact that we haven’t found a reduction yet, and reduce our belief in reductionism! (And,
... (read more)
4Said Achmiz
I have no idea, and indeed am skeptical of the entire practice of assigning numerical strengths to beliefs of this nature. However, I think I am sufficiently certain of this belief to serve our needs in this context. Absolutely not, because the whole problem is that even given my assent to the proposition that consciousness is completely emergent from the physical, if I don’t know how it emerges from the physical, I am still unable to reason about the things on the higher parts of the ladder. That’s the conceptual objection, and it suffices on its own; but I also have a more technical one, which is— —the laws of conditional probability, you say? But hold on; to apply Bayes’ Rule, I have to have a prior probability for the belief in question. But how can I possibly assign a prior probability to a proposition, when I haven’t any idea what the proposition means? I can’t have a belief in any of those things you list! I don’t even know if they’re coherent! In short: my answer to the latter half of your query is “no, and in fact you’re asking a wrong question”.
3Said Achmiz
(Answering the latter half of your comment first; I’ll respond to the other half in a separate comment.) Indeed, there is a sense in which your “has to be” is of the latter type. In fact, we can go further, and observe that even the “is” (at least in this case—and probably in most cases) is also a sort of “has to be”, viz., this scenario: A: Is your husband at home? B: Yes, he is. Why, I’m looking at him right now; there he is, in the kitchen. Hi, honey! A: Now, you don’t know that your husband’s at home, do you? Couldn’t he have been replaced with an alien replicant while you were at work? Couldn’t you be hallucinating right now? B: Well… he has to be at home. I’m really quite sure that I can trust the evidence of my senses… A: But not absolutely sure, isn’t that right? B: I suppose that’s so. This is, fundamentally, no more than a stronger version of your “submerged in a crater of concrete” scenario, so by what right do we claim it to be qualitatively different than “he left work two hours ago”? And that’s all true. The problem, however, comes in when we must deduce specific claims from very general beliefs—however certain the latter may be!—using a complex, high-level, abstract model. And of this I will speak in a sibling comment.

In that case, I would say you're between 3 and 4. And I can't say you're wrong about my relative certainty being unwarranted, but obviously I think you're wrong, and it's because I believe that QM leaves only enough wiggle room of uncertainty that the things we don't yet know can never actually affect the physical consequences of such a procedure (even from the inside).

This is why I think QM is necessary to advance up the ladder; it's the reason I advanced up the ladder, and it's the only experimentally true thing we have so far that permits you to advance up the ladder. Trying to go a different route would be dishonest.

6Said Achmiz
I will take your word about QM, I suppose, but I’m afraid that is scant concession. Much like I said in my other comment—how the physical consequences of this or that event translate into subjective experience is exactly what’s at issue!
Ah, these are much better descriptions now, well done!

Thanks, I sincerely appreciate your help in clarifying :)

We can say that consciousness has to be completely emergent-from-the-physical. But there’s a difference between that and what you said; “consciousness is completely emergent-from-the-physical”

Can you explain why the former doesn't imply the latter? I'm under the impression it does, for any reasonable definition of "has to be", as long as what you're conditioning on (in this case reductionism) is true. I suppose I don't see your objection.

4Said Achmiz
Sure. Basically, this is the problem: Now, I think reductionism is true. But suppose we encounter something we can’t reduce. (Of course your instinct—and mine, in a symmetric circumstance—would be to jump in with a correction: “can’t yet reduce”! I sympathize entirely with this—but in this case, that formulation would beg the question.) We should of course condition on our belief that reductionism is true, and conclude that we’ll be able to find a reduction. But, conversely, we should also condition on the fact that we haven’t found a reduction yet, and reduce our belief in reductionism! (And, as I mentioned in the linked comment thread, this depends on how much effort we’ve spent so far on looking for a reduction, etc.) What this means is that we can’t simply say “consciousness is completely emergent-from-the-physical”. What we have to say is something like: “We don’t currently know whether consciousness is completely emergent from the physical. Conditional on reductionism being true, consciousness has to be completely emergent from the physical. On the other hand, if consciousness turns out not to be completely emergent from the physical, then—clearly—reductionism is not true.” In other words, whether reductionism is true is exactly at issue here! Again: I do think that it is; I would be very, very surprised if it were otherwise. But to assume it is to beg the question. ---------------------------------------- Tangentially: To the contrary: the implications of the phrase “has to be”, in claims of the form “[thing] has to be true” are very different from the implications of the word “is” (in the corresponding claims). Any reasonable definition of “has to be” must match the usage, and the usage is fairly clear: you say that something “has to be true” when you don’t have any direct, clear evidence that it’s true, but have only concluded it from general principles. Consider: A: Is your husband at home right now? B: He has to be; he left work over two hours ago

I'm not so sure this fails. I'm inclined to take this to mean you are between 2 and 3 or between 3 and 4, depending on what specifically you object to in 3.

But I'm curious, if you had to hazard a guess in your own words as to what is most likely to be true about identity and consciousness in the case of a procedure that reproduces your physical body (including brain state) exactly - pick the hypothesis you have with the highest prior, even if it's nowhere close to 50% - what would it be, completely independent of what I've said in this ladder?

3Said Achmiz
Honestly, I have no idea. I really don’t know how to reason about subjective phenomenal consciousness. That’s the problem. It seems clear to me that anyone who, given the state of our current knowledge, is very certain of anything like the latter part of #3 or of any of the higher numbers, is simply unjustified in their certainty. If you can’t give me a satisfying reduction of consciousness—one which fully and comprehensively dissolves the Hard Problem—then nothing approaching certainty in any of these views is possible. I wholly agree with this: But everything beyond that? Everything that deals with subjective experience, anticipation, etc.? I just plain don’t know.

That's fair; I wasn't sure how to phrase the idea in 8 to exclude 9, so the language isn't perfect, and I agree, now that I've seen it, that your proposed 10 is conceptually a step above my 9. Let me know if it is okay for me to add your 10 to my list.

Out of curiosity, do you consider the 10 you wrote "intuitively true", or just the logical next step in a hypothetical ladder?

Edit: I did my best to fix 8.

2Dacyn
You can add my #10 to your list. Regarding your new #8, I'm not sure I understand the distinction between a brain implemented on a computer chip vs a simulation of a brain. Regarding my opinion on what is "intuitively true", it seems like all of them are different ways of making more precise the notion of identity, I don't know that it makes sense to give one of them a privileged status. In other words they all seem to be referring to slightly different concepts, all of which appear to be valid...
You just did, right? That’s the description, right there. (But it’s not identical with #4 as written! … was it meant to be?)

Fair, I did my best to fix 3 and 4.

And I disagree about the first part of what I quoted; I see no reason to assent to that. Why do you think this?

Can you be more specific about what exactly I said that you're referring to? Forgive me but I actually am not sure which part you mean.

I was able to follow the QM sequence just enough to… well, not to grasp this point, precisely, but to grasp that Eliezer was claiming this. But I d
... (read more)
4Said Achmiz
Re: and Fair enough, this is a reasonable way of looking at it. So, let me go ahead and try to “rate” each of the levels in the way you’re looking for:
1. This seems like obvious nonsense.
2. This seems like slightly less obvious but still nonsense.
3. I don’t really know whether this is true [pending resolution of the Hard Problem], but it seems unlikely.
4. I don’t really know whether this is true; I don’t even know if it’s likely. Without a resolution of the Hard Problem, I can’t really reason about this.
5. Ditto #4.
6. I am very uncertain about this because I don’t understand quantum mechanics. That aside, ditto #4.
7. I have no clue whether this could be true.
8. I have no clue whether this could be true.
9. I have no clue whether this could be true.
I think that fails to satisfy your desired criteria, yes? (Unless I have misunderstood?)
4Said Achmiz
Ah, these are much better descriptions now, well done! Sure. You said: [emphasis added] The bolded part is what I was referring to; I see no basis for claiming that. Why would solving the Hard Problem not be necessary for reasoning about the consequences of implementing or copying a mind? Now, realistically, what I think would happen in such a case is that either we’d have solved the Hard Problem before reaching that point (as you suggest), or we’ll simply decide to ignore it… which is not really the same thing as not needing to solve it. Yes, I understand that that’s Eliezer’s point. But it’s hardly convincing! If we haven’t solved the Hard Problem, then even if we tell ourselves “copying can’t possibly matter for identity”, we will have no idea what the heck that actually means. It doesn’t, in other words, help us understand what happens in any of the scenarios you describe—and more importantly, why. As an aside: No, we can’t assert this. We can say that consciousness has to be completely emergent-from-the-physical. But there’s a difference between that and what you said; “consciousness is completely emergent-from-the-physical” is something that we’re only licensed to say after we discover how consciousness emerges from the physical. Until then, perhaps it has to be, but it’s an open question whether it is… [rest of my response is conceptually separate, so it’s in a separate comment]
Fair enough. Would you say that if the discussion reaches this part of the ladder, a digression must then be made to ensure that both parties well and truly understand QM?

I'm not sure it "must" be made, but that's exactly the route I would go at this point.

suppose one lacks the mathematical aptitude / physics background / whatever to grasp QM; is further progress impossible? In that case, what ought one’s view of this topic be?

I guess I haven't considered this. When I find myself in this position, I try to gain the requisite ... (read more)

2Said Achmiz
I have now re-read the post. I’m afraid it didn’t help. Or rather, to be more precise—I conclude from it that using the concept of “probability” in this way is incoherent. Basically, I think that Yu’el is wrong and De’da is right. (In fact, I would go further—I don’t even think that De’da’s answer concerning which way to bet makes a whole lot of sense… but this is a tangent, and one which brings in issues unrelated to “probability” per se.)
3Said Achmiz
I beg to differ! I found the QM sequence impenetrable (and I don’t consider myself to entirely lack math aptitude). (Granted, it’s been a while since the last time I gave it a close read, and perhaps if I try again I’ll get through it, but I do not have high hopes for gaining anything like the kind of understanding it would take to base intuitions about consciousness and identity on!) That said, I think that if your approach relies on your interlocutor having a solid understanding of quantum mechanics, then… I’m afraid it’s even more flawed than I thought at first… :( I have read this post, though again, it has been a while. I will re-read it and get back to you! I, too, am a reductionist, and concur with your strong belief; unfortunately, this doesn’t actually help… it doesn’t move us any closer to a solution. And I disagree about the first part of what I quoted; I see no reason to assent to that. Why do you think this? I was able to follow the QM sequence just enough to… well, not to grasp this point, precisely, but to grasp that Eliezer was claiming this. But I don’t see how it entails or implies or even suggests that a solution to the Hard Problem is unnecessary here? You just did, right? That’s the description, right there. (But it’s not identical with #4 as written! … was it meant to be?)

I have expanded on 1 in an edit. Let me know if it makes sense.

3Said Achmiz
Looks good!

This needs more elaboration, if you want to use it in the way you do. I know what you mean here (at least, I think I do), but it may not be obvious to many interlocutors—“the same thing” in what way, exactly? (Especially since this is step #1.)

I see your point. Admittedly, I built this while talking to someone who was past 1, and I consider the position a gross misunderstanding of current technology. I'll consider how to describe the proposition better, though.

Or is it your intention to show that this view is incoherent, for exactly this reason

Pr... (read more)

3Said Achmiz
I can speak for myself only, but—I’m not past #4, for exactly the reasons I outlined. (I think #4 is incoherent, for—apparently—the reason you intended it to be incoherent; but the steps above that are problematic, as I said.) Fair enough. Would you say that if the discussion reaches this part of the ladder, a digression must then be made to ensure that both parties well and truly understand QM? If so, then suppose one lacks the mathematical aptitude / physics background / whatever to grasp QM; is further progress impossible? In that case, what ought one’s view of this topic be? Sorry, I guess I was not clear… I take this answer to be just pushing the problem back one irrelevant step, since my question applies to this scenario also! Well, that’s the thing; I don’t know (after reading this) whether you’re spouting utter nonsense! That’s precisely the problem I was trying to point to: I, for example, couldn’t tell you where on the ladder I am, for the reasons I outlined in the grandparent. https://en.wikipedia.org/wiki/Hard_problem_of_consciousness

To think he could have had twice as many readers.

The encoding scheme you're talking about is Huffman Coding, and the ambiguity you're trying to avoid occurs exactly when one symbol's code is a prefix of another's. The mechanism to build an optimal prefix(-free) code is called the Huffman Tree, and it uses a greedy algorithm that builds a full binary tree from the bottom up based on the frequencies of the symbols. Leaves are symbols, and the code for a symbol is the sequence of left or right branches you must traverse to reach that symbol.

To get more specific, you add all the symbols to a heap ba... (read more)
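
For concreteness, here's a minimal Python sketch of that greedy construction (the sample string and the tie-breaking counter are incidental details I've made up, not anything specified above):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    counts = Counter(text)
    # One weighted leaf per symbol; the running index breaks frequency ties
    # so heapq never tries to compare the node payloads themselves.
    heap = [(freq, i, (sym, None, None)) for i, (sym, freq) in enumerate(counts.items())]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate case: only one distinct symbol
        return {heap[0][2][0]: "0"}
    i = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)   # the two lowest-frequency nodes...
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, i, (None, left, right)))  # ...merge into a parent
        i += 1
    codes = {}
    def walk(node, prefix):
        sym, left, right = node
        if sym is not None:
            codes[sym] = prefix          # leaf: record the path of branches taken
        else:
            walk(left, prefix + "0")     # left branch = 0
            walk(right, prefix + "1")    # right branch = 1
    walk(heap[0][2], "")
    return codes

print(huffman_codes("this is an example of a huffman tree"))
```

By construction no code is a prefix of any other, since symbols only ever sit at the leaves.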

Sorry, but I'm not in the habit of taking one for the quantum superteam.

If you're not willing to "take one for the team" of superyous, I'm not sure you understand the implications of "every implementation of you is you."

And I don't think that it really helps to solve the problem;

It does solve the problem, though, because it's a consistent way to formalize the decision so that on average for things like this you are winning.

it just means that you don't necessarily care so much about winning any more. Not exactly the point.

I th... (read more)

1thrawnca
No, take another look:

So here's why I prefer 1A and 2B after doing the math, and what that math is.

1A = 24000
1B = 26206 (rounded)
2A = 8160
2B = 8910

Now, if you take (iB-iA)/iA, which represents the proportional increase in the expected value of iB over iA, you get the same number in both problems, as you stated.

(iB-iA)/iA = .0919 (rounded)

This number's reciprocal represents how many times greater the expected value of iA is than the marginal expected value gained by choosing iB over iA.

iA/(iB-iA) = 10.88 (not rounded)

Now, take this number and divide it by the quantity p(iA wins)-p(iB wins). This represents how much y... (read more)
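
For anyone following along, here's a quick check of those numbers. I'm assuming the payoffs from the Allais Paradox post this thread is about: a sure $24,000 for 1A, a 33/34 chance of $27,000 for 1B, a 34% chance of $24,000 for 2A, and a 33% chance of $27,000 for 2B.

```python
# Expected values of the four gambles, under the assumed payoffs above.
gambles = {
    "1A": (1.0, 24000),      # sure thing
    "1B": (33 / 34, 27000),
    "2A": (0.34, 24000),
    "2B": (0.33, 27000),
}
ev = {name: p * payoff for name, (p, payoff) in gambles.items()}
print(ev)  # ≈ {1A: 24000, 1B: 26205.9, 2A: 8160, 2B: 8910}

# The proportional gain of B over A is the same in both problems:
print((ev["1B"] - ev["1A"]) / ev["1A"])  # ≈ 0.0919
print((ev["2B"] - ev["2A"]) / ev["2A"])  # ≈ 0.0919
```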

It seems to me the answer becomes more obvious when you stop imagining the counterfactual you who would have won the $10000, and start imagining the 50% of superpositions of you who are currently winning the $10000 in their respective worlds.

Every implementation of you is you, and half of them are winning $10000 as the other half lose $100. Take one for the team.

0thrawnca
Sorry, but I'm not in the habit of taking one for the quantum superteam. And I don't think that it really helps to solve the problem; it just means that you don't necessarily care so much about winning any more. Not exactly the point. Plus we are explicitly told that the coin is deterministic and comes down tails in the majority of worlds.

If you count the amount of "wanting to switch" you expect to have because the cable guy hasn't arrived yet, it should equal exactly the amount of "wishing you hadn't been wrong" you expect to have if you pick the second half because the cable guy arrived before your window started.

I'm not sure how to say this so it's more easily parseable, but this equality is exactly what conservation of expected evidence describes.
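
To make that concrete, here's a toy numeric version; the uniform 8-hour arrival window and the 2-hour check time are made-up specifics, not taken from the original problem statement.

```python
# Toy check of conservation of expected evidence for the cable-guy setup,
# assuming (my assumption) a uniform arrival time over an 8-hour window.
window = 8.0
prior_second_half = 0.5            # P(H): he arrives in the second half
t = 2.0                            # you look at the clock 2 hours in

p_arrived = t / window                             # P(E): he has already come
p_not_yet = 1 - p_arrived                          # P(not E)
p_H_if_arrived = 0.0                               # he came in the first half, so H is false
p_H_if_not_yet = (window / 2) / (window - t)       # 4 second-half hours out of 6 remaining

expected_posterior = p_arrived * p_H_if_arrived + p_not_yet * p_H_if_not_yet
print(expected_posterior, prior_second_half)       # both 0.5, up to float rounding
```

The probability-weighted regret on the "he already arrived" branch exactly balances the boost toward switching on the "not yet" branch, which is the equality described above.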