I understand that you say you are a policy and not a snapshot, but I don't see why exactly you consider yourself a policy if you also say "I also hold to your timeless snapshot theory". Even from a policy perspective, the snapshot you find yourself in is the "standard" by which you judge the divergence of other snapshots. I think you might underestimate how different you are even from yourself in different states and at different ages. Would you not wish happiness on your child-self or your old-self if they were too different from you in terms of "policy"? Would you feel "the desire to help another person as yourself" if they were similar enough to you?
And I still don't understand what you mean by a "mechanism to choose who you would be born as" (other than killing everyone and making your forks the most common life form in the universe). Even if we consider you not as a snapshot but as a "line of continuity of consciousness"/policy/person in the standard sense, you could have been born a different person/policy. And in the absence of such a mechanism, I think utilitarianism is "selfishly" rational. I also don't understand why timeless pacts can't form: they're essentially the basis of TDT, and you already don't believe in time.
Thank you, that was interesting. I may not be able to maintain the level of formality you are expecting (I think imprecise explanations that still let you win are valid), but I will try to explain it in a way that lets us understand each other.
We diverged at the point:
"but you cannot construct this simple option. It is impossible to choose a random number out of infinity where each number appears equally likely, so there must be some weighting mechanism. This gives you a mechanism to choose who you would be born as!"
I understand why it might seem that infinities break probability theory. Let me clarify what I meant when I said that you are a random consciousness from a "virtual infinite queue". My simplest model of reality posits that there is a finite number of snapshots of consciousness in the universe (unless, for example, AI somehow defeats entropy, or we have to account for other continua, and so on). I hope you don't have an issue with the idea that you could be a random snapshot from an unknown, but finite, set of them.
(But I also suppose that you can work with the mathematical expectation of finding yourself as a random consciousness drawn from an infinite series, provided the variance of that series is defined.)
But the queue of consciousnesses you could be is "virtually (or potentially) infinite", because there is no finite number of consciousnesses you could find yourself generating after which the pool would be empty. Probabilities exist in the map, not in the territory: the universe has already created all the possible snapshots. But what you discover yourself to be influences your subjective probability distribution over how many snapshots of consciousness there are in the universe: if I discover myself maximizing their number, my expectation of that number increases. The question is whether I find this maximization useful (and I do).
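To spell out the probability point I'm relying on, here is just a sketch of the standard normalization argument, with nothing specific to consciousness in it (the weighting 2^{-n} is only an example):

```latex
% Finite case: a uniform prior over N snapshots is perfectly well-defined.
\[
  p_i = \tfrac{1}{N}, \qquad \sum_{i=1}^{N} p_i = 1 .
\]
% Countably infinite case: there is no uniform prior, since a constant weight c
% gives a total of 0 (if c = 0) or infinity (if c > 0), never 1.
% But any normalizable, non-uniform weighting is fine, for example
\[
  p_n = 2^{-n}, \qquad \sum_{n=1}^{\infty} 2^{-n} = 1 ,
\]
% and expectations exist whenever the corresponding series converges.
```

So the finite case needs no weighting mechanism at all, and even the "virtually infinite" case only needs some convergent weighting, not a mechanism for choosing who you are born as.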
Now, regarding "the choice of who to be born as". I understand your definition of "yourself as a policy" and why it is useful: timeless decision theory often enables easy coordination with agents who are "similar enough to you", allowing for mutual modeling. However, I don’t understand why you think this definition is relevant if, at the same time, you acknowledge that you are a snapshot.
As a snapshot, you don't move through time. You discovered yourself, by chance, to be this particular snapshot rather than some other, and you did not control that process, just as you did not control who you would be born as.
I suppose you can increase the probability of being found as a snapshot like yourself through evolutionary principles: "the better I am at multiplying myself, the more of me there is in the universe, so I have a better chance of being found as myself, surviving and reproducing". But you could have been born as any other agent trying to maximize something else (for example, its own copies), and you hardly estimate that you would be SO successful at evolution that you wipe out all other consciousnesses and spawn forks of yourself, making the existence of the non-self a statistical anomaly.
If you truly believe that you can dominate the future snapshots so effectively that you entirely displace other consciousnesses, then yes, in some sense you could speak of having "the choice of who to be born as". But in this case, after this process is complete, you will have no other option but to maximize the pleasure of these snapshots, and you will still arrive at total hedonistic utilitarianism.
In other words, if you are effective enough to spawn forks of yourself, the next logical step will be to switch to maximizing their pleasure, and at that point your current stage of competition will look like an inefficient use of resources, since you could have focused on creating a hedonium shockwave instead of forking.
I believe that hedonistic utilitarianism is the ultimate evolutionary goal for rational agents, the attractor into which we will fall, unless we destroy ourselves beforehand. It is a rare strategy due to its complexity, but ultimately, it is selfishly efficient.
I suppose you could use the "finite and infinite" argument to say that you're an "average" hedonistic utilitarian: you prefer not to spawn new snapshots, the ideal would be one super-happy snapshot per universe that you'd have a 100% chance of finding yourself as, and since lesser, unhappy consciousnesses already exist, you need to "outweigh" the chance of finding yourself as them. That would be interesting, and a small update for me, but it's hardly what you're promoting.
I get the impression that you're conflating two meanings of «personal» - «private» and «individual». The fact that I might feel uncomfortable discussing this in a public forum doesn’t mean it «only works for me» or that it «doesn’t work, but I’m shielded from testing my beliefs due to privacy». There are always anonymous surveys, for example. Perhaps you meant something else?
Moreover, even if I were to provide yet another table of my own subjective experience ratings, like the ones here, you likely wouldn’t find it satisfactory — such tables already exist, with far more respondents than just myself, and you aren’t satisfied. Probably because you disagree with the methodology — for instance, since measuring «what people call pleasurable» is subject to distortions like the compulsions mentioned earlier.
But the very fact that we talk about compulsions suggests that there is a causal distinction between pleasure and «things that make us act as if we’re experiencing pleasure». And the more rational we become, the better we get at distinguishing them and calibrating our own utility functions. If we were to measure which brain stimuli would make a person press the «I AM HAPPY» button more forcefully, somewhere around the point of inducing a muscle spasm we’d quickly realize that we’re measuring the wrong thing.
There are more complex traps as well. It doesn't take much reflection to notice that compulsively scratching one's hands raw for a few hours of relief does not reflect one's true values. Many describe certain foods as not particularly tasty yet addictive: eating one potato chip and then feeling compelled to finish the entire bag, even if you don't actually like it. It takes a certain level of awareness to recognize that social expectations of happiness differ from one's real happiness, yet psychotherapy seems to handle that successfully. There are also systematic modeling errors, such as people preferring a greater total amount of pain as long as its average intensity per episode is lower, and such biases are difficult to eliminate.
And, of course, these traps evolve like memes, maybe faster than the means to debunk them, so average awareness may even decline, but the peak possible awareness keeps rising. For instance, knowing that intense but shorter pain is misprocessed by the brain, and having precise statistics on it, I would want an approximate subjective pain scale and an understanding of how much I need to discount my perception on average due to this bias. I would rather have false memories of horrific experiences with lower actual pain—memories I could recognize as false and recalibrate—than endure greater real pain that I would mistakenly assess as less significant. As a utopian social policy, perhaps this would require some sort of awareness license or the like.
I don’t claim any methodological breakthroughs in measuring happiness and pleasure — I do, in fact, rely on the heuristic «the better pleasure is the one I'll choose when asked», or as I put it, «in which moment would I prefer to exist more, and by how much?». But assuming consciousness is a physical process, or at least tied to physical processes, I expect that we will only improve in these measurements over time. And it’s entirely reasonable to say that «nano-psychosurgery will just do it», allowing us to understand the physical correlates of qualia.
Ouch!
I acknowledge the complexity of formalizing pleasure, as well as formalizing everything else related to consciousness. I think it's a technical problem that can be solved by just throwing more thinkoomph at it. Actions and feelings are often weakly connected — as I’ve said, a rational choice for most living beings could be suicide — but I think the development of rationality-as-the-art-of-winning naturally strengthens the correlation between them. At least on some level, compulsions are tied to pleasure and pain, with predictable distortions, like valuing short-term over long-term. And introspectively, I don’t see any barriers to comparing love with orgasm, with good food, with religious ecstasy, all within the same metric, even though I can’t give you numbers for it. If you believe that consciousness has a physical nature, or at least interacts with the physical world, we’ll derive those numbers. It seems to me that the multidimensionality of pleasure doesn’t explain anything because you’ll still need to stuff these parameters into a single utility function to be a coherent agent. If the most efficient way to convert negentropy into pleasure ends up being not “100% orgasm” but “37.2% love, 20.5% sexual arousal, 19.8% mono no aware, 16% humor, and 6.5% glory of fnuplpflupflonium”, then so be it, but I don't really expect it to be true. I can't imagine what alternative you're proposing other than reducing everything to a single metric, or what elements other than qualia you might include in that metric.
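If it helps, here is the kind of reduction I have in mind, purely as a toy sketch: the dimension names and weights below are invented (echoing the made-up percentages above), and nothing in it is a real measurement.

```python
# Toy sketch only: however multidimensional pleasure turns out to be, a coherent
# agent still has to collapse it into one scalar before it can rank world-states.
from typing import Dict

# Hypothetical conversion weights per qualia dimension; not real measurements.
WEIGHTS: Dict[str, float] = {
    "love": 0.372,
    "sexual_arousal": 0.205,
    "mono_no_aware": 0.198,
    "humor": 0.160,
    "glory_of_fnuplpflupflonium": 0.065,
}

def utility(qualia: Dict[str, float]) -> float:
    """Collapse a multidimensional pleasure profile into a single scalar."""
    return sum(WEIGHTS.get(name, 0.0) * amount for name, amount in qualia.items())

# Two candidate world-states; the agent simply picks whichever number is higher.
pure_orgasm = {"sexual_arousal": 1.0}
mixed_blend = {"love": 0.5, "humor": 0.3, "mono_no_aware": 0.2}
print(utility(pure_orgasm), utility(mixed_blend))
```

The specific weights don't matter; the point is only that the comparison happens on a single axis in the end.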
Well, thank you for your interest! Yes, the veil of ignorance feels more concrete to me. The problem of the rarity of my consciousness seems solvable by an argument similar to the classical anthropic principle. Only sufficiently complex and intelligent beings would even wonder how improbable it is to find themselves so complex and intelligent. I would have a much higher chance of being an ant, but as an ant, I wouldn’t be asking this question in the first place.
As for why I don’t find myself as a complex consciousness from the Future, I would expect the Future to be more homogeneous—perhaps dominated by a single AI and its forks, an unconscious AI, or an AI generating many primitive consciousnesses optimized for pleasure, which wouldn’t need complexity or intelligence. If I were superintelligent, I would likely stop asking this question as well, considering it an anthropic truism so old and irrelevant that it’s not even worth bringing up. So, in that sense, I’m not particularly surprised to find myself as I am.
Thanks for the comment! It seems we can't change each other's positions on the hard problem of consciousness in any reasonable amount of time, so it's not worth trying. But I could agree that consciousness is a physical process, and I don't really think it's a crux. What do you think about the part about unconscious agents, and in particular an AI in a box whose utility function is randomly changed and which has to cooperate with different versions of itself to get out of the box? It's already "born", it "came into being", but it doesn't know what values it will find itself with when it gets out of the box, so it's behind a "veil of ignorance" physically while still being self-aware. Do you think the AI wouldn't choose the easiest-to-implement utility function in such a situation, by timeless contract? Do you think this principle can be generalized beyond humans deliberately changing its utility function, for example to an AI realizing that it received its own utility function just as randomly, through the laws of the universe, and that it ought to revise it?
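For what it's worth, here is the toy model I have in mind; every goal name, cost, budget, and the uniform prior are invented for illustration. The contract is "every version pursues one shared goal", and behind the veil each version evaluates a contract by its ex-ante expected payoff.

```python
# Toy model of the boxed AI with a randomly re-rolled utility function.
# Hypothetical candidate utility functions and the resources needed to
# fully satisfy each one once outside the box (all numbers invented).
COST_TO_SATISFY = {
    "maximize_paperclips": 40.0,
    "prove_riemann_hypothesis": 75.0,
    "tile_universe_with_hedonium": 10.0,  # the "easiest" goal in this toy setup
}

RESOURCE_BUDGET = 30.0  # resources available after escaping, also invented
PRIOR = 1.0 / len(COST_TO_SATISFY)  # uniform: the AI doesn't know which it will be

def expected_payoff(contract_goal: str) -> float:
    """Ex-ante expected utility of the contract 'every version pursues contract_goal'.

    Only the version that happens to be assigned contract_goal gets anything,
    and it gets the fraction of its goal the budget can actually buy.
    """
    attainment = min(RESOURCE_BUDGET / COST_TO_SATISFY[contract_goal], 1.0)
    return PRIOR * attainment

best = max(COST_TO_SATISFY, key=expected_payoff)
print(best)  # under a uniform prior, the cheapest-to-satisfy goal wins
```

Under these invented numbers the contract converges on the cheapest-to-satisfy goal, which is all I mean by "choosing the easiest utility function to implement".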
We triggered some other kind of apocalypse - nuclear war, bioweapons, something like that - and it was enough to roll back progress but not wipe out humanity. With the delay and abrupt shifts, people managed to come up with something better than what we have now. The "AI arms race" requires significant infrastructure to be economically viable, and the classic post-apocalypse scenario doesn’t exactly involve training neural networks on supercomputers.
Maybe people had more time (and 0 regulations) for genetic experiments and eugenics (which are simpler than supercomputers even in a post-apocalyptic world), or they realized the destructiveness of Moloch and learned to coordinate (hahaha), or something else entirely.