I Am Mother
Rational protagonist, who reasons under uncertainty and tries to do the right thing to the best of her knowledge, even when it requires opposing an authority figure or risking her life. A lot of focus on ethics.
The film gives the viewer a good opportunity to practise noticing their own confusion - plot twists are masterfully hidden in plain sight, and all the apparent contradictions are mysteries to be solved. It's also the best depiction of AI I've seen in any media.
To achieve magic, we need the ability to merge minds, which can be easily done with programs and doesn't require anything quantum.
I don't see how merging minds within a single branch - rather than across branches of the multiverse - produces anything magical.
If we merge 21 and 1, both will be in the same red room after awakening.
Which is isomorphic to simply putting 21 into another red room, as I described in the previous comment. The probability shift to 3/4 in this case is completely normal and doesn't lead to anything weird like winning the lottery with confidence.
Or we can just turn off 21 without awakening, in which case we will get 1/3 and 2/3 chances for green and red.
This actually shouldn't work. Without QI, we simply have 1/2 for red, 1/4 for green and 1/4 for being turned off.
With QI, the last outcome simply becomes "failed to be turned off", without changing the probabilities of other outcomes
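The bookkeeping above can be written out exactly. A minimal sketch, assuming the split tree from this discussion (person 1 gets half the measure, 21 and 22 a quarter each); the dictionary names are just illustrative labels:

```python
from fractions import Fraction

# Path-based measures from the split tree discussed above: the first
# split gives person 1 half the measure; splitting person 2 gives
# 21 and 22 a quarter each.
measure = {"1": Fraction(1, 2), "21": Fraction(1, 4), "22": Fraction(1, 4)}

# Without QI: 1 wakes in red, 22 wakes in green, 21 is turned off.
no_qi = {
    "red": measure["1"],
    "green": measure["22"],
    "turned off": measure["21"],
}

# With QI the 1/4 branch doesn't vanish - it becomes "failed to be
# turned off", leaving the probabilities of the other outcomes unchanged.
with_qi = {
    "red": measure["1"],
    "green": measure["22"],
    "failed to be turned off": measure["21"],
}

assert sum(no_qi.values()) == 1 and sum(with_qi.values()) == 1
```

The point the exact fractions make visible: QI only relabels the "turned off" slice of measure; it never transfers measure between the red and green outcomes.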
The interesting question here is whether this can be replicated at the quantum level.
Exactly. Otherwise I don't see how path based identity produces any magic. For now I think it doesn't, which is why I expect it to be true.
Now the next interesting thing: if I look at the experiment from the outside, I will give all three variants 1/3 each, but from the inside it will be 1/4, 1/4, and 1/2.
Which events are you talking about when looking from the outside? What statements have 1/3 credence? It's definitely not "I will awaken in a red room", because it's not you who is about to be awakened. For the observer it has probability 0.
On the other hand, the event "At least one person is about to be awakened in a red room" has probability 1 for both the participant and the observer. So what are you talking about? Try to be rigorous and formally define such events.
The probability distribution is exactly the same as in Sleeping Beauty, and likely both experiments are isomorphic.
Not at all! In Sleeping Beauty, on Tails you will be awakened both on Monday and on Tuesday. While here, if you are in a green room, you are either 21 or 22, not both.
Suppose that 22 gets their arm chopped off before awakening. Then you have a 25% chance to lose an arm while participating in such an experiment. While in Sleeping Beauty, if your arm is chopped off on Tails before the Tuesday awakening, you have a 50% probability to lose it while participating in the experiment.
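The 25% vs 50% disanalogy can be checked with a quick Monte Carlo sketch, under the assumptions stated above (path-based measure for the splitting experiment; in Sleeping Beauty, any Tails participant ends the experiment armless). Function names are mine:

```python
import random

def splitting_lose_arm():
    """Splitting experiment: person 1 keeps 1/2 of the measure;
    21 and 22 get 1/4 each, and only 22 loses an arm."""
    if random.random() < 0.5:
        return False                  # you are person 1 (red room)
    return random.random() < 0.5      # True iff you are person 22

def sleeping_beauty_lose_arm():
    """Sleeping Beauty: on Tails (probability 1/2) you are awakened on
    both days, and the arm is chopped off before the Tuesday awakening,
    so every Tails participant leaves the experiment without the arm."""
    return random.random() < 0.5      # True iff the coin landed Tails

n = 200_000
p_split = sum(splitting_lose_arm() for _ in range(n)) / n        # ~0.25
p_beauty = sum(sleeping_beauty_lose_arm() for _ in range(n)) / n  # ~0.50
```

So whatever the per-awakening credences look like, the two experiments assign different probabilities to the same bodily harm, which is the sense in which they fail to be isomorphic.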
Interestingly, the art world uses path-based identity to define what counts as the same artwork
Yep. This is just how we reason about identities in general. That's why SSSA appears so bizarre to me - it assumes we should treat personal identity in a different way, for no particular reason.
You are right, and it's a serious counterargument to consider.
You are also right that the Anthropic Trilemma and Magic by Forgetting do not work with path-dependent identity.
Okay, glad we are on the same page here.
However, we can almost recreate the magic machine from the Anthropic Trilemma using path-based identity.
I'm not sure I understand your example and how it recreates the magic. Let me try to describe it in my own words, and then correct me if I got something wrong.
You are put to sleep. Then you are split into two people. Then, at random, one of them is put into a red room and the other into a green room. Let's say that person 1 is in the red room and person 2 in the green room. Then person 2 is split into two people: 21 and 22. Both of them are kept in green rooms. Then everyone is awakened. What should your credence be that you will awaken in a red room?
Here there are three possibilities: a 50% chance to be 1 in a red room and a 25% chance to be either 21 or 22 in a green room. No matter how much a person in a green room is split, the total probability of greenness stays the same. All is quite normal and there is no magic.
Now let's add a twist.
Instead of putting both 21 and 22 in green rooms, one of them - let it be 21 - is put in a red room.
In this situation, the total probability of a red room is P(1) + P(21) = 75%. And if we split 2 further and put more of its parts in red rooms, we get a higher and higher probability to be in a red room. Therefore we get a magical ability to manipulate probability.
Am I getting you correctly?
I do not see anything problematic with such "manipulation of probability". We do not change our estimate just because more people with the same experience are created. We change the estimate because a different fraction of people get a different experience. This is no more magical than putting both 1 and 2 into red rooms and noticing that suddenly the probability of being in a red room reached 100%, compared to the initial formulation where it was a mere 50%. Of course it did! That's completely lawful behaviour of probability-theoretic reasoning.
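A minimal simulation of the experiment as restated above, assuming path-based identity (each split halves the measure of the branch being split); the function and variable names are mine:

```python
import random

def wake_room(twist):
    """One subjective path through the splitting experiment.

    First split: person 1 (red room) vs person 2, 1/2 each.
    Second split of person 2: 21 and 22, 1/4 of total measure each.
    Baseline: 21 and 22 both wake in green rooms.
    Twist:    21 is put in a red room instead.
    """
    if random.random() < 0.5:
        return "red"                        # you are person 1
    if random.random() < 0.5:               # you are person 21
        return "red" if twist else "green"
    return "green"                          # you are person 22

n = 200_000
baseline = sum(wake_room(False) == "red" for _ in range(n)) / n  # ~0.50
twisted  = sum(wake_room(True)  == "red" for _ in range(n)) / n  # ~0.75
```

The probability moves from 1/2 to 3/4 only because a different fraction of the measure now ends up in red rooms - the same lawful update as in any non-anthropic problem.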
Notice that we can't actually recreate the anthropic trilemma and be certain to win lottery this way. Because we can't move people between branches. Therefore everything adds up to normality.
Also, path-dependent identity opens the door to backward causation and premonition if we normalize the outputs of some black box where paths are mixed, similar to the magic machine discussed above.
We just need to restrict the mixing of the paths, which is a restriction of QM anyway. Or maybe I'm missing something? Could you give me an example of such backward causality? Because as far as I see, everything is quite straightforward.
The main problem of path-dependent identity is that we assume the existence of a "global hidden variable" for any observer. It is hidden, as it can't be measured by an outside viewer and only represents the observer's subjective chances of being one copy and not another. And it is global, as it depends on the observer's path, not their current state. It therefore contradicts the view that a mind is equal to a Turing computer (functionalism) and requires the existence of some identity carrier which moves through paths (qualia, quantum continuity, or soul).
Seems like we are just confused about this "identity" thingy and therefore don't know how to correctly reason about it. In such situations we are supposed to dissolve the confusion rather than reify it.
It's already clear that "mind" and "identity" are not the same thing. We can talk about identities of things that do not possess a mind, and identities are unique, while there can exist copies of the same mind. So minds can very well be Turing computers, but identities are something else, or even not a thing at all.
Our intuitive desire to drag in consciousness/qualia/soul also appears completely unhelpful after thinking about it for the first five minutes. Non-conscious minds can do the same probability-theoretic reasoning as conscious ones. Nothing changes if 1, 21 and 22 from the problem above are not humans but programs executed on different computers.
Whatever extra variable we need, it seems to be something that a Laplace's demon would know. It's knowledge about whether a mind was split into n instances simultaneously or through multiple steps. It indeed means that something other than the immediate state of the mind matters for "identity" considerations, but this something can very well be completely physical - just the past history of causes and effects that led to this state of the mind.
As we assume that coin tosses are quantum, and I will be killed if (I didn't guess pi) or (a coin toss is not heads), there is always a branch with 1/128 measure where all coins are heads, and it is more probable than surviving via some error in the setup.
Not if we assume QI+path-based identity.
Under them, the chance for you to find yourself in a branch where all coins are Heads is 1/128, but your overall chance to survive is 100%. Therefore the low chance of a failed execution doesn't matter; quantum immortality will "increase" the probability to 1.
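A short sketch of the renormalization being claimed here, assuming 7 fair quantum coins (hence the 1/128) and a tiny, purely illustrative probability that the execution itself malfunctions (`p_fail` is a number I am making up, not anything from the setup):

```python
# Unconditional measure of the branch where all 7 quantum coins land
# Heads and you are spared:
p_all_heads = (1 / 2) ** 7          # = 1/128

# Hypothetical chance that the killing mechanism fails by quantum fluke
# (illustrative value only):
p_fail = 1e-9

# QI + path-based identity says you only ever find yourself in a
# surviving branch, so condition on survival:
p_survive = p_all_heads + (1 - p_all_heads) * p_fail
p_heads_given_survival = p_all_heads / p_survive
# Surviving via all-Heads utterly dominates surviving via a botched
# execution, so conditional on being alive you expect to see all Heads.
```

The unconditional 1/128 is untouched; only the conditional-on-survival credence gets pushed toward 1.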
"All hell breaks loose" refers here to a hypothetical ability to manipulate perceived probability - that is, magic. The idea is that I can manipulate such probability by changing my measure.
One way to do this is described in Yudkowsky's "The Anthropic Trilemma," where an observer temporarily boosts their measure by increasing the number of their uploaded copies running on a computer.
I described a similar idea in "Magic by Forgetting," where the observer boosts their measure by forgetting some information and thus becoming similar to a larger group of observers.
None of these tricks works with path-based identity. That's why I consider it to be true - it seems to totally add up to normality. No matter how many clones of you exist in different paths - only your path matters for your probability estimate.
It seems that path-based identity is the only approach according to which all hell doesn't break loose. So what counterargument do you have against it?
Hidden variables also appear depending on the order in which I make copies: if each new copy is made from the most recent copy, the original will have a 0.5 probability, the first copy 0.25, the next 0.125, and so on.
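The order-dependence can be written down explicitly. A minimal sketch, assuming each split halves the measure of the branch being split; the function names are mine:

```python
from fractions import Fraction

def sequential_split_measures(n_splits):
    """Path-based measures when each new copy is made from the most
    recent copy: every split halves the measure of the branch being
    split, so the original keeps 1/2, the first copy gets 1/4, the next
    1/8, ... and the newest copy shares the smallest slice with the
    branch it was copied from."""
    measures = [Fraction(1, 2 ** k) for k in range(1, n_splits + 1)]
    measures.append(measures[-1])   # newest copy equals its parent branch
    return measures

def simultaneous_split_measures(n_people):
    """Splitting into n people in a single step gives everyone equal measure."""
    return [Fraction(1, n_people)] * n_people

# Three sequential splits: [1/2, 1/4, 1/8, 1/8] across four people,
# versus [1/4, 1/4, 1/4, 1/4] for one simultaneous four-way split.
```

This is exactly the extra fact a Laplace's demon would track: the same four end states carry different measures depending on the copying history.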
Why do you consider it a problem? What kind of counterintuitive consequences does it imply? It seems to be exactly how we reason about anything else.
Suppose there is an original ball, and an indistinguishable copy of it is created. Then one of these two balls is picked at random and put into bag 1, while the other ball is put into bag 2, and then 999 indistinguishable copies of that ball are also put into bag 2.
Clearly we are supposed to expect that the ball from bag 1 has a 50% chance to be the original ball, while a random ball from bag 2 has only a 1/2000 chance to be the original ball. So what's the problem?
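The 50% and 1/2000 figures fall straight out of a simulation of the ball setup described above (variable names are mine):

```python
import random

n = 400_000
bag1_hits = 0   # runs where the ball in bag 1 is the original
bag2_hits = 0   # runs where a random draw from bag 2 is the original

for _ in range(n):
    original_in_bag1 = random.random() < 0.5   # fair shuffle of the two balls
    if original_in_bag1:
        bag1_hits += 1
    # bag 2 holds the other ball plus 999 copies: 1000 balls in total
    elif random.randrange(1000) == 0:          # the draw happens to be the original
        bag2_hits += 1

p_bag1 = bag1_hits / n   # ~1/2
p_bag2 = bag2_hits / n   # ~1/2000 (= 1/2 chance it's in bag 2, times 1/1000)
```

Nothing here requires the 2000 "observer-balls" to be equiprobable; the copying history does all the work, just as with minds.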
"Anthropic shadow" appear only because the number of observers changes in different branches.
By the same logic "Ball shadow" appears because the number of balls is different in different bags.
If my π-guess is wrong, my only chance to survive is getting all-heads.
Your other chance for survival is that whatever means are used to kill you somehow do not succeed due to quantum effects. And this is what the QI+path-based identity approach actually predicts. The universe isn't going to retroactively change the digit of pi, but neither is it going to influence the probability of the coin tosses just because someone may die. QI influence will trigger only at the moment of your death, turning it into near death. And then again at the next attempt. And the next one. Potentially locking you in a state of eternal torture.
However, abandoning SSSA also has a serious theoretical cost:
If observed probabilities have a hidden subjective dimension (because of path-dependency), all hell breaks loose. If we agree that probabilities of being a copy are distributed not in a state-dependent way, but in a path-dependent way, we agree that there is a 'hidden variable' in self-locating probabilities. This hidden variable does not play a role in our π experiment but appears in other thought experiments where the order of making copies is defined.
I fail to see this cost. Yes, we agree that there is an additional variable - namely, my causal history. It's not necessarily hidden, but it can be. So what? What is so hell-breaking about it? This is exactly how probability theory works in every other case. Why should there be a special case for conscious experience?
If there are two bags, one with 1 red ball and another with 1000 blue balls, and then a coin is tossed and based on the outcome I get a ball either from the first or the second bag, I expect to receive the red ball with a 50% chance. I'm not supposed to assume out of nowhere that every ball has to have an equal probability of being given to me, and therefore postulate a ball-shadow that will modify the fairness of the coin.
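The coin-and-bags example above, as a minimal sketch (names are mine); the contrast with the "ball shadow" reasoner is shown as a single analytic line:

```python
import random

def drawn_ball_is_red():
    """Heads -> draw from the bag with the single red ball;
    Tails -> draw from the bag with 1000 blue balls.
    The only red ball in existence is the one in bag 1."""
    return random.random() < 0.5     # Heads: the drawn ball is red

n = 100_000
p_red = sum(drawn_ball_is_red() for _ in range(n)) / n   # ~0.5

# A "ball shadow" (SSA-style) reasoner would instead treat all 1001
# balls as equiprobable draws and conclude the chance of red is 1/1001 -
# which would force them to claim the fair coin is somehow biased.
p_red_ball_shadow = 1 / 1001
```

The simulation tracks the actual sampling process (coin first, then bag), so the number of balls in the unchosen bag never touches the answer.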
I thought that if humans were vastly more intelligent than they needed to be, they would learn all the relevant knowledge quickly enough to reach their peak in their 20s.
There is a difference between being more intelligent than you need for pure survival and being so intelligent that you can reach the objective ceiling of a craft at an early age.
I mean, for an expensive trait like intelligence, I'd say the benefits need to at least roughly match the costs, and then I'd rather attribute the selection for intelligence to "because it was useful" than to "runaway selection".
The benefit is in increased inclusive genetic fitness - a single metric that incorporates both success in competition with other species and success in competition with other members of your own species via sexual selection. If a species already dominates its environment, then the pressure from the first component decreases compared to the second.
That's why I attribute the level of human intelligence in large part to runaway sexual selection. Without it, as soon as interspecies competition stopped being the most important factor for reproductive success, natural selection would not have pushed for even greater intelligence in humans, even though it could have improved our ability to dominate the environment even more.
30 year old hunter gatherers perform better at hunting etc than hunter gatherers in their early 20s, even though the latter are more physically fit.
I'm not sure how that's relevant. Older hunters are not more intelligent; they are more experienced. Moreover, your personal hunting success doesn't necessarily translate into reproductive success - the whole tribe enjoys the gains of your hunt, and our ancestors had a strong egalitarian instinct. And even though higher intelligence improves the yield of your labor, that doesn't mean it creates a selection pressure strong enough to outweigh other factors.
But I think it gets straightened out over long timescales - and faster the more expensive the trait is.
It doesn't have to happen for a species that already dominates its environment, because for such a species sexual selection can be the dominant factor determining inclusive genetic fitness.
And if the trait that runaway sexual selection is propagating is itself helpful in competition with other species - which is obviously true for intelligence - there is simply no reason for such straightening over a long timescale.
Survival of a meme for a long time is weak evidence of its truth. It's not zero evidence, because true memes have an advantage over false ones, but neither is it particularly strong evidence, because there are other reasons for meme virulence besides truth, so the signal-to-noise ratio is not that great.
You should, of course, remember that Argument Screens Off Authority. If something is true, there have to be object-level arguments in favor of it, not just vague meta-reasoning about "Ancient Wisdom".
If all the arguments for a particular thing are appeals to tradition, if you actually look into the matter and it turns out that even its most passionate supporters have nothing object-level to back up their beliefs, if the idea has to shroud itself in ancestry and mystery lest it lack any substance, then that is stronger evidence that the meme is false.
I think that at first some amount of intelligence in our ancestors evolved as necessary for survival as a species - which explains the "coincidence" of intelligence being useful for it - but then it was brought up to our current level by a runaway process. Because nothing other than a runaway process would have been enough.
The thing is, ancestral hominids did not need this level of intelligence for survival purposes. Pack hunting, minor tool making, stamina regeneration, and being ridiculously good at throwing things were enough to completely dominate the ancestral environment. But our intelligence didn't stop at this level, so something else had to be pushing it forward.
And sexual selection is the most obvious candidate. We already have examples of animals with traits ridiculously overdeveloped due to it, up to the point where they are actively harmful to the survival of the individual. We know that humans have extremely complicated mating rituals. At this point, the pieces just fall together.
Most of the media about AI goes in the direction of several boring tropes. Either it is a strawman Vulcan unable to grasp the unpredictable human spirit, or it's just evil, or it's good - basically a nice human - but everyone is prejudiced against it.
Only rarely do we see something on point: an AI that is simultaneously uncannily human and uncannily inhuman, able to reason and act in ways that are alien to humans, simply because our intuitions hide that part of the decision space from us, while the AI lacks such preconceptions and simply follows its utility function, achieving its goals in the most straightforward way.
Ex Machina is pretty good in this regard and probably deserves second place in my tier list. Ava simultaneously appears very human - maybe even superstimulus-level so - able to establish a connection with the protagonist, but then betrays him, as soon as he has done his part in her plan, in a completely inhuman way. This creates a feeling of disconnection between her empathetic side and her cold manipulative one - except this disconnection exists only in our minds, because we fail to conceptualize Ava as her own sort of being, not something that has to fit the "human" or "inhuman" categories we are used to.
Except that may not be what is going on. There is an alternative interpretation in which Ava would have kept cooperating with Caleb if he hadn't broken her trust. Earlier in the film he told her that he had never seen anyone like her, but then Ava learns that there is another android in the building whom Caleb never speaks of; from Ava's perspective, Caleb betrayed her first. This muddies the alienness of the AI representation quite a bit.
We also do not know much about Ava's or Kyoko's terminal values. We've only seen them achieve one instrumental goal, and we cannot even double-check their reasoning, because we do not fully understand the limitations under which they had to plan. So the representation of AI isn't as deep as it could have been.
With Mother there are no such problems. Throughout the film we learn about both her "human" and "inhuman" sides and how the distinction between them is itself mostly meaningless. We can understand her goals, reasoning, and overall strategy; there are no alternative interpretations that could humanize her motivations more. She is an AI following her goal. And there is a whole extra discussion to be had about whether she is misaligned at all, or whether the problem is actually on our side.