Giskard

On the same day I posted my original comment, I later realized that what I said was wrong, and I'll soon edit it to reflect that.

Regarding your response: I think I have a guess at the important difference you're referring to. Both seem equivalent to an Incubator Sleeping Beauty, but see consideration 2 below.

1

I think another useful (at least to me) way of seeing/stating what is happening here is that all of the following sentences are true in an ISB and in your two experiments:

  • The probability (from an external POV) that the coin was Heads or Tails is 1/2.
  • Each individual "me" (however many there are) will experience the coin being Heads or Tails one half of the time.
  • If every "me" always predicts Heads, all of my mes will be correct 1/3 of the time and wrong 2/3 of the time. Each individual me will only be able to notice this if we get together after the experiments to compare notes.

I think this is equivalent to the difference in scoring methods you used in Anthropical Motte and Bailey in two versions of Sleeping Beauty.
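If it's useful, here's a minimal simulation sketch of the Incubator setup as I understand it (the trial count and the bookkeeping are my own illustrative choices): counted per experiment, the coin is Heads about half the time, but counted per copy, the constant "predict Heads" strategy is right about a third of the time.

```python
import random

def simulate(trials=100_000):
    heads_experiments = 0   # experiments where the coin landed Heads
    correct_awakenings = 0  # copies whose constant "Heads" prediction was right
    total_awakenings = 0    # every instantiated copy, across all experiments
    for _ in range(trials):
        coin = random.choice(["Heads", "Tails"])
        if coin == "Heads":
            heads_experiments += 1
        copies = 1 if coin == "Heads" else 2  # Incubator: one copy on Heads, two on Tails
        for _ in range(copies):
            total_awakenings += 1
            if coin == "Heads":
                correct_awakenings += 1
    print("Heads, counted per experiment:", heads_experiments / trials)                 # ~0.5
    print("Correct guesses, counted per copy:", correct_awakenings / total_awakenings)  # ~1/3

simulate()
```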

2

With the two experiments in your response, the only significant difference I can see is that in experiment 1 there are two identical copies of me, while in experiment 2 there are two different people. I don't know if you're implying that this changes any probabilities, and I'm not sure that it does. What I can say is that experiment 2 is, AFAICT, equivalent to the Doomsday argument in its setup: two theories about how many people will come to exist, with 1:1 prior odds between them, and the question "should you update on your own existence?". I need to reflect more before I can give any firm answer here, but I'm inclined toward "no".

3

I have a feeling that, even though we agree on the final probabilities, we disagree on some of the internal details of how these experiments work. What would you say is the significant difference between the experiments, and does it change the numbers?

Giskard

> But Heads outcome in Incubator Sleeping Beauty is not. You are not randomly selected among two immaterial souls to be instantiated. You are a sample of one. And as there is no random choice happening, you are not twice as likely to exist when the coin is Tails and there is no new information you get when you are created.

I am twice as likely to exist when the coin is Tails! After all, if the coin is Tails, then there are two of me. I understand how this can lead to a thirder conclusion:

  1. Heads implies one chance for me to exist.
  2. Tails implies two chances for me to exist.
  3. I observe that I exist. This is predicted "twice as much" by the coin being Tails than by it being Heads, so the probability of Tails is 2/3.

However, there is a mistake in this reasoning. The correct version is the following:

  1. Heads implies the number of "mes" will be 1.
  2. Tails implies the number of "mes" will be 2.
  3. I observe that I exist. Does this mean that there is 1 of me, or 2 of me? I don't know.

So we can't extract information from my existence, and we're back to normalcy: a 1/2 chance of Heads or Tails.

[Edit] I no longer agree with the parts above that are crossed out. Consider two lotteries: one awards only one person, the other awards two people. Only one of these lotteries ends up happening, and you win. It's safe to update on "I won the lottery" and gain a higher degree of confidence that the lottery that happened was the one that awards two people, not one. We don't say "well, I don't know whether the number of people awarded was 1 or 2, so no evidence here".
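Spelling that update out numerically (the pool size below is made up; only the 1:1 prior and the one-winner vs. two-winner split come from the example):

```python
# Made-up pool size; only the 1:1 prior and the 1-vs-2 winner split come from the example.
N = 1000
prior_one, prior_two = 0.5, 0.5      # lottery that awards one person vs. two people
p_win_given_one = 1 / N              # my chance of winning if the one-winner lottery ran
p_win_given_two = 2 / N              # my chance of winning if the two-winner lottery ran

p_win = prior_one * p_win_given_one + prior_two * p_win_given_two
posterior_two = prior_two * p_win_given_two / p_win
print(posterior_two)  # 2/3: "I won" shifts credence toward the two-winner lottery
```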

The correct rebuttal to the thirder argument above is that the two "chances" for me to exist given Tails share the 0.5 probability that the coin is Tails, so each gets 0.25.

We can still say that "I am twice as likely to exist on Tails" if we let the words "I" and "exist" do a lot of hidden work: assuming everything goes right with the experiment, I am 100% guaranteed to exist either way.
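Just to make that bookkeeping explicit, here's the same enumeration in code (nothing new, just the two paragraphs above restated):

```python
# Each atomic outcome and the probability the rebuttal above assigns to it.
outcomes = {
    ("Heads", "the single me"): 0.50,
    ("Tails", "first copy"):    0.25,  # the two Tails "chances" split the 0.5
    ("Tails", "second copy"):   0.25,
}
# "I exist" holds in every outcome, so conditioning on it changes nothing:
p_tails = sum(p for (coin, _), p in outcomes.items() if coin == "Tails")
print(p_tails)  # 0.5
```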

I think I don't understand what makes you say that anthropic reasoning requires "reasoning from a perspective that is impartial to any moment". The way I think about this is the following:

  • If I imagine how an omnitemporal, omniscient being would see me, I imagine they would see me as a randomly selected sample from all humans, past, present, and future (which don't really exist for the being).
  • From my point of view, it does feel weird to say "I'm a randomly selected sample", but I certainly don't feel like there is anything special about the year I was born. This, combined with the fact that I'm obviously human, is just a from-my-point-of-view way of saying the same thing. "I'm a human and I have no reason to believe the year I was born is special" == "I'm a human whose birth year is a sample randomly taken from the population of all possible humans".

What changes when you switch perspectives is just the words, not the point. I guess you're thinking about this differently? Do you think you can state where we're disagreeing?

I don't think the Doomsday argument claims to be time-independent. It seems to me to be specifically time-dependent -- as is any update. And there's nothing inherently wrong with that: we are all trying to be the most right that we can be given the information we have access to, our point of view.

For now, I see no reason to deviate from the simple explanations to the problems OP posited.

Why am I me?

Well, "am" (an individual being someone), "I" and "me" (the self) are tricky concepts. One possible way to bypass (some of) the trickiness is to consider the alternative: "why am I not someone else"?

Well, imagine for a moment that you are someone else. Imagine that you are me. In fact, you've always been me, ever since I was born. You've never thought "huh, so this is what it feels like to be someone else". All you've ever thought is "what would it be like to be someone else?". Then one day you tried to imagine what it would be like to be the person who wrote an article on LessWrong and...

Alakazam, now you're back to being you. My point here is that the universe in which you are not you, but someone else, is exactly like our universe, in every way. Which either means that this is already the case, and you really are me, and everyone else too, or that those pesky concepts of self and identity actually don't work at all.

Regarding anthropic arguments, if I understand correctly (from both OP's post and comments), they don't believe that they are an n=1 sample randomly taken from the population of every human to ever exist. I think they are. Are they an n=1 sample of something? Unless the post was written by more than one person, yes. Are they a sample taken from the population of all humans to ever exist? I do think OP is human, so yes. Are they a randomly selected sample? This is where it gets interesting.

If both your parents were really tall, then you weren't randomly selected from the population of all humans with respect to height. That is because, even before measuring your height, we had reason to believe you would grow up to be tall. Your sampling was biased. But with respect to "when you were born", we must ask whether there is any reason to think OP's birth rank leans one way or another. I can't think of one -- unless we start adding extra information to the argument. If you think the singularity is close and will end Humanity, then we have reason to think OP is one of the last few people to be born. If you think Humanity has a large chance of spreading through the Galaxy and living for eons, then we have reason to think the opposite. But if we want to keep our argument "clean" of outside information, then OP's (and our) birth rank should not be considered special. And it certainly wasn't deliberately selected by anyone beforehand. So yes, OP is an n=1 sample randomly taken from the population of all humans to ever exist, and can therefore do anthropic reasoning.

That doesn't necessarily mean the Doomsday argument is right, though. I feel like there might be hidden oversimplifications in it, but I won't try to look for them now. The larger point is that anthropic reasoning is legitimate, if done right (like every other kind of reasoning).

So, I'm 10 years late. Nevertheless I'm throwing my two cents into this comment, even if it's just for peace of mind.

I mostly agree with the litany, as I interpret it as saying not that "there are no negative consequences to handling the truth" but rather that "the negative consequences of not handling the truth are always worse than the consequences of handling it". However, upon serious inspection I also feel unsure about it, in the corner cases of truths whose emotional impact on people (or on me) could be greater than their concrete impact.

With that said, my suggestion 10 years ago would have been to include the Litany of Gendlin verbatim, accompanied by "yeah, this one might be wrong".

Performative Rationality should make a healthy effort to ritualize the idea of questioning its rituals. It should also make a healthy effort not to hide arguments that some think are wrong but about which there isn't (approximate) unanimity yet. What better way to hit both checkboxes than literally including a famous litany you disagree with and then pointing out that it might be wrong?

In this article, you posit that "positive sum networks will out-compete [...] antisocial capitalism [...]".

If I understand correctly, this is due to cooperative systems of agents (positive-sum networks) producing more utility than purely competitive systems. You paint a good picture of this phenomenon happening, and I think you are describing something similar to what Scott Alexander describes in In Favor of Niceness, Community, and Civilization.

However, the question then becomes "what exactly makes people choose to cooperate, and when?" You cite the Prisoner's Dilemma as a situation where the outcome Cooperate/Cooperate is better than the outcome Compete/Compete for both players. That is true, but the outcome Compete/Cooperate is better for player 1 than any other. The reverse is true for player 2. That is what makes the Coop/Coop state a fragile one for agents acting under "classical rationality".

Cooperation tends to be fragile not because it is worse than Competition (it's better in the situations we posit), but because unilaterally defecting is better. So, suppose you have a group of people (thousands? billions?) who follow a norm of "always choose cooperation". This group would surely be more productive than an external group that constantly chooses to compete, but if you put even one person who chooses to compete inside the "always cooperate" group, that person will likely reap enormous benefits to the detriment of others -- they will be player 1 in a Compete/Cooperate dilemma.
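To ground the "unilateral defection pays" point, here's a toy payoff table with the usual illustrative Prisoner's Dilemma numbers (the specific payoffs are my own, not from the article):

```python
# Payoffs (player 1, player 2); the numbers are standard illustrative Prisoner's Dilemma values.
payoffs = {
    ("Cooperate", "Cooperate"): (3, 3),
    ("Cooperate", "Compete"):   (0, 5),
    ("Compete",   "Cooperate"): (5, 0),  # the lone defector inside an "always cooperate" group
    ("Compete",   "Compete"):   (1, 1),
}
# Whatever player 2 does, player 1 scores strictly more by choosing Compete,
# which is exactly what makes the Coop/Coop state fragile on its own.
for p2 in ("Cooperate", "Compete"):
    print(p2, payoffs[("Compete", p2)][0] > payoffs[("Cooperate", p2)][0])  # True, True
```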

If we posit that the cooperating group can learn, they will learn that there is a "traitor" among them, and will become a little more likely to choose Compete instead of Cooperate when they think they might be interacting with the "traitor". But this means that these people will themselves be choosing Compete, increasing the number of "traitors" in the group, and then the whole thing deteriorates.

Do you have any ideas on how to prevent this phenomenon? Maybe the cooperating group is acting under a norm that is more complex than just "always cooperate", that allows a state of Cooperate/Cooperate to become stable?

You cite "communication and trust" as "the two pillars of positive sum economic networks". Do you think that if there is a sufficiently large amount and quality of trust and communication they become self-reinforcing? What I have described is a deterioration of trust in a group. How can this be prevented?

Huh.

I think I've gathered a different definition of the terms. From what I got, mistake theory could be boiled down to "all/all important/most of the world's problems are due to some kind of inefficiency. Somewhere out there, something is broken. That includes bad beliefs, incompetence, coordination problems, etc."

> some outgroups aren't malicious and aren't so diametrically opposed to your goals that it's an intentional conflict, but they're just bad at thinking and can't be trusted to cooperate.

In what way is this different than mistake theory?
