There are several different aspects to this, and I have a different attitude toward each.
The multi-agent theory of consciousness is plausible. In fact, it is almost tautological: any sufficiently complex object can be considered "multi-agent". But an "agent" is not necessarily a "consciousness". Otherwise, you know, you get homunculus recursion.
But there is another side to the issue.
The idea "You should force your brain to create new characters. You should mentally talk to these new characters. This will help you solve your psychological problems."
There are not really many logical connections between the first and second.
People do often feel better doing this. But people also feel good when they read sci-fi and fantasy. People also feel good when they smoke weed.
Personally, this approach sets off my paranoia.
It seems to me that the modern intellectual part of humanity is abnormally keen on potentially dangerous psychological practices: meditation, lucid dreaming, séances, channeling, Hellinger family constellations.
What is the danger of these practices? Well, I have no serious, proven grounds for this claim. Just hypothetical hand-waving: "If there really existed some sinister non-human forces that have always secretly manipulated humanity through a system of hidden signs, earlier through religions and prophetic dreams, later through abductions and channeling, then the modern fascination of intellectuals with certain practices would fit perfectly into this conspiracy scenario."
Good evening. Sorry to bring up this old thread. Your discussion was very interesting. Specifically regarding this comment, one thing confuses me. Isn't "the memory of an omniscient God" in this thought experiment the same as "the set of all existing objects in all existing worlds"? If your reasoning about the set paradox proves that "the memory of an omniscient God" cannot exist, doesn't that prove that "an infinite universe" cannot exist either? Or is there a difference between the two? (Incidentally, I would like to point out that the universe and even the multiverse can be finite. Then an omniscient monotheistic God would not necessarily have infinite complexity. But for some reason many people forget this.)
Pascal's Mugging.
The problem is that the probability "if I don't pay this person five dollars, there will be a zillion units of suffering in the world" existed before this person ever told you about it.
This probability has always existed.
Just as the probability "if I pay this person five dollars, there will be a zillion sufferings in the world" has always existed.
Just as the probability "if I raise my right hand, the universe will disappear" has always existed.
Just as the probability "if I don't raise my right hand, the universe will disappear" has always existed.
You can justify absolutely any action in this way.
This is how obsessive-compulsive disorders work.
What equally strongly supports any strategy actually supports no strategy.
These probabilities cancel each other out. And the fact that we can see an obvious self-interested reason behind the words of the person asking us for five dollars makes the probability that his words are true lower than the opposite probability, not higher.
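To make the cancellation explicit, here is a minimal sketch of the expected utilities (the symbols below are placeholders of my own, not anything from the original formulation): with $U$ the zillion-scale disutility, $\varepsilon_1$ the prior probability that paying triggers it, and $\varepsilon_2$ the prior probability that refusing triggers it,

$$EU(\text{pay}) = -5 - \varepsilon_1 U, \qquad EU(\text{refuse}) = -\varepsilon_2 U.$$

If nothing the mugger says gives evidence that $\varepsilon_2 > \varepsilon_1$ (and his obvious self-interest arguably pushes the other way), the enormous terms cancel and the decision reduces to the ordinary five dollars.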
This is a vicious circle. Morality itself is the handmaiden of humans (and of similar creatures in fantasy and SF). Morality has value only insofar as we find it important to care about human and quasi-human interests. This does not answer the question "Why do we care about human and quasi-human interests?"
One could try to find an answer in the prisoner's dilemma, in the logic of Kant's categorical imperative, in the cooperation of rational agents and the like. But then I should sympathize with any system that cares about my interests, even if that system is otherwise like the paperclip maximizer and completely devoid of "unproductive" self-reflection. Great. There is some cynical common sense in this, but I feel a little disappointed.
The holy problem of qualia may actually be close to the question at hand here.
What do you mean when you ask yourself: "Does my neighbor have qualia?"
Do you mean: "Does my neighbor have the same experiences?" No. You know for sure that the answer is "No." Your brains and minds are not connected. What's going on in your neighbor's head will never be your experiences. It doesn't matter whether it's (ontologically) magical blue fire or complex neural squiggles. Your experiences and your neighbor's brain processes are different things anyway.
What do you mean when you ask yourself: "Are my neighbor's brain processes similar to my experiences?" What degree of similarity or resemblance do you mean?
Some people think that this is purely a value question. It is an arbitrary decision by a piece of the Universe about which other pieces of the Universe it will empathize with.
Yes, some people try to solve this question through Advaita. One can try to view the Universe as a single mind suffering from dissociative identity disorder. I know that if my brain and my neighbor's brain are connected in a certain way, then I will feel his suffering as my own. But I also know that if my brain and an atomic bomb are connected in a certain way, then I will feel the thermonuclear explosion as an orgasm. Should I empathize with atomic bombs?
We can try to look at the problem a little differently. The main difference between my sensation of pain and my neighbor's sensation of pain is the individual neural encoding. But I do not sense the neural encoding of my sensations. Or at least I do not sense that I sense it. If you make a million copies of me whose memories and sensations are translated into different neural encodings (while preserving informational identity), then none of them will be able to say with certainty which neural encoding it is currently running on. Perhaps, when analyzing the question "what is suffering", we should discard the aspect of individual neural encoding. That is, suffering is any material process that would become suffering for me if it were translated into my neural encoding by some suitable translation technology.
But the devil is in the details. Again, a "suitable translation technology" could make me perceive the explosion of an atomic bomb as an orgasm. On the other hand, an atomic bomb is something I could not be, even hypothetically (unlike the million copies in the thought experiment). Then again, from a third angle, I cannot be my neighbor either (we have different memories).
This is a very difficult and subtle question indeed. I do not want to appear to be an advocate of egoism and loneliness (I have personal reasons not to be). But this, in my opinion, is an aspect of the question that cannot be ignored.
The results of these tests have a much simpler explanation. Let's say we played a prank on all of humanity: we slipped each person a jar of caustically bitter quinine under the guise of delicious squash caviar. A week later, we conduct a mass survey: "How much do such pranks irritate you?" It is natural to expect that the people who tend to eat any food quickly, without paying immediate attention to its smell and taste, will show the strongest hatred for such pranks. This will not mean that they are quinine lovers. It will mean that they managed to swallow some quinine before their body detected the substitution, and therefore they became especially angry and turned into "quinine-phobes".
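If it helps, here is a toy simulation of that prank survey (all numbers and distributions are arbitrary illustrations of my own, not data from the tests under discussion), showing how the correlation arises even though nobody in the model "likes" quinine:

```python
import random

# Toy model of the quinine prank survey, just to make the selection effect explicit.
random.seed(0)

eating_speed = []    # 0 = slow, cautious eater; 1 = fast eater
reported_anger = []

for _ in range(10_000):
    speed = random.random()
    # Fast eaters swallow more quinine before the body detects the substitution.
    quinine_swallowed = speed * random.uniform(0.5, 1.0)
    # Irritation with the prank grows with the amount of quinine swallowed.
    anger = quinine_swallowed + random.gauss(0, 0.05)
    eating_speed.append(speed)
    reported_anger.append(anger)

# Pearson correlation between eating speed and reported anger.
n = len(eating_speed)
mean_s = sum(eating_speed) / n
mean_a = sum(reported_anger) / n
cov = sum((s - mean_s) * (a - mean_a) for s, a in zip(eating_speed, reported_anger)) / n
var_s = sum((s - mean_s) ** 2 for s in eating_speed) / n
var_a = sum((a - mean_a) ** 2 for a in reported_anger) / n
print("correlation:", round(cov / (var_s * var_a) ** 0.5, 2))
# Strongly positive, even though no one in the model "likes" quinine:
# fast eaters simply swallowed more of it before noticing.
```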
You seem like a very honest and friendly person, as do most of the people in this thread. I would just say, "What difference does it make whether it's a bug or a feature? Maybe the admins themselves haven't agreed on this. Maybe some admins think it's a bug, and some admins think it's a feature. It's a gray area. But in any case, I'd rather not draw the admins' attention to what's going on, because then their opinion might be determined in a way that's not favorable to us. We're not breaking any rules while this is a gray area. But our actions will become a violation of the rules if the gray area is no longer a gray area."
I have the opposite opinion regarding human motives. Ten years ago, I was thinking about this while corresponding with an acquaintance. I came to this conclusion: "Maybe what she tells me about her relationship with her boyfriend is a false representation. But what I say and think can also be a false representation. If we are all false representations, then what is truth and what is false? It would be better for me, as a liar and as an incarnate lie, to support my own kind. At least it would be an act of solidarity."
Hume's quote (or rather the way you use it) has nothing to do with models of reality. Your post is not about the things Scott was talking about from the very beginning.
Suppose I say "Sirius is a quasar." I am relying on the generally accepted meaning of the word "quasar." My words suggest that the interlocutor change the model of reality. My words are a hypothesis. You can accept this hypothesis or reject it.
Suppose the other person says "Sirius cannot be considered a quasar because that would have very bad social consequences." Perhaps he is making a mistake, for the reasons you described in your text. (To be honest, I am not sure that this is a mistake. But I realize that I am writing this on a resource for noble, crazy idealists, so I will not delve deeply into the issue. Let's assume that it is indeed a mistake.)
Suppose I say "Let's consider stars like Sirius to be quasars." Is this sentence similar to the previous one? No. I am not suggesting that the other person change their model of reality. My words are not a hypothesis. They are just a project. They are just a proposal for certain actions.
Suppose the other person says "If we use the word 'quasar' in this way, it will have very bad social consequences." Is his logic sound? In my opinion, yes. My proposal does not suggest that anyone change their model of reality. It is a proposal for a specific practical action. It is as if I suggested: "Let's sing the National Anthem while walking." If the other person says: "If you sing the National Anthem while walking, it will lead to terrible consequences" (and if he can prove it), is he wrong?
Sorry for the possibly broken language; I am writing through an online translator.
The world described here leaves a mixed impression. The ability to escape the unsolicited influence of time is very valuable. But at the same time there is something deceptive about it. While reading, I felt the bitter laughter of a religious fundamentalist inside me. You know, the kind of person who constantly accuses modern Western technocratic civilization of hypocritical infantilism and of trying to forget that death exists.
"These naive hedonists try to forget about the Grim Reaper. But they didn't really beat him. If you get into a car accident and die of wounds, then at that moment you will know that your current consciousness will disappear forever. Then another person wakes up in the hospital who does not remember the current moment. And there will continue to be many such deaths. "
That's the trouble. Redaction Machines do not destroy death and suffering; they just make them invisible. There is a catch here. These machines gave humanity a dangerous illusion of immortality. As a result, humanity has even stopped developing normal gerontology. The heroine of the story moves into an ever more distant future, yet medicine barely seems to advance. Naturally, why should people develop it? After all, every responsible decision is made by people whose memory preserves no death and no suffering. Humanity is essentially divided into two factions: 1) those who think that everything is fine; 2) those who suffer and die, but whose memories will disappear, so these versions of consciousness will never be able to influence how the state budget is distributed. It's like in the movie "The Prestige," where the decision "Shall we repeat the drowning trick?" was made each time only by the surviving copy.
It seems to me that this is an attempt to have it both ways.
On the one hand, you assume that there are discrete moments of experience. But how long would such a moment be? It is unlikely to be a single Planck time. This means you assume that different chronoquanta of the brain's existence are bound together into one "moment of experience". You postulate the existence of "granules of qualia" that have internal integrity and temporal extension.
On the other hand, you assume that these "granules of qualia" are separated from each other and are not connected into a single whole.
Why?
The first assumption and the second do not sit well together.
If you believe that there is a mysterious "temporal mental glue" that binds the Planck-scale moments of the brain's existence into "granules of qualia" a split second long, then it is just as logical to assume that these "granules of qualia" are in turn bound by that same glue into a single stream of consciousness.
No?
Sorry, I feel a little like a bitter cynic and a religious fundamentalist. It seems to me that behind this kind of reasoning there often lies an unconscious desire to maintain faith in the possibility of "mind uploading" or similar operations. If our life is not a single stream, then mind uploading would be much easier to implement. That is why many modern intellectuals prefer such theories.
One could object that binding "granules of qualia" into a single stream of an observer's existence makes no evolutionary sense. That is true. But binding individual Planck moments of the brain's existence into "granules of qualia" makes no evolutionary sense either. If you assume that the first is somehow an illusion, you can assume the same about the second.