Branching timelines have to come with probabilities, and that's where the wheels fall off. Imagine you're Carol, living on the other side of town, not interacting with the machine at all. Then events like those in the movie happen. Before the events, there was one permanent Aaron. After the events, there are one or more permanent Aarons, depending on which timeline Carol ends up in. But this violates conservation of Aarons weighted by probability: a weighted sum of 1's and 2's (and 3's and so on) is bigger than a weighted sum of just 1's. Some Aarons appeared out of nowhere.
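A toy version of that sum (the branch probabilities here are mine, made up for illustration, not anything from the movie):

```python
# Expected permanent-Aaron count over branches, from Carol's point of view.
# Probabilities are invented for illustration; counts are Aarons per branch.
branches = [
    (0.90, 1),  # (probability of timeline, permanent Aarons in it)
    (0.09, 2),
    (0.01, 3),
]
expected_aarons = sum(p * n for p, n in branches)
print(expected_aarons)  # 1.11 > 1: impossible if Aarons are conserved in
                        # expectation, unless some branch ends with 0 Aarons.
```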
Things could be balanced out if there were some timeline with reasonably high measure, consistent with the behavior of folks in the movie, which ended up with 0 permanent Aarons. But what is it? Is it some timeline where a box never had an Aaron climb out of it, but had an Aaron climb into it later? Why would he do that?
It's basically "time goes backwards inside the box when it's turned on". So you can turn the box on in the morning and immediately see you-2 climb out of it. The two of you coexist for a day, and you-2 shares some future information with you. In the evening you set the box to wind down and climb inside, wait several hours inside the box, then climb out as you-2 back in the morning and relive the day from that perspective. Then you-1 climbs into the box, is never seen again, and you remain.
When put this way, it's nice and consistent. But in the movie some copies actually stop their originals from getting in the box, resulting in permanent duplication. And that seems really hard to explain even with a branching timelines model. If there's a timeline that ends up with two permanent Bobs, there must be some other timeline with no permanent Bobs at all, due to conservation of Bobs. The only way such a timeline could appear is if Bob turned the machine on, then nobody climbed out and he climbed in, but Bob could simply refuse to do that.
Another cool thing is that the time machine can also provide antigravity. Consider this: Bob assembles a box weighing 50kg and turns it on. Immediately, Bob-2 climbs out, weighing 70kg. So the box has to weigh -20kg until Bob-1 gets in and it shuts off again. The movie doesn't spell that out, but it does spell out that they first started working on antigravity and then got time travel by accident, so that's really good writing.
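The bookkeeping, with those made-up numbers (assuming only that the total mass visible in the room can't jump at the moment Bob-2 steps out):

```python
# Toy mass balance: whatever mass Bob-2 "borrows from the future" has to be
# debited against the box while it's running.
box_at_turn_on = 50.0  # kg, the assembled box
bob_2 = 70.0           # kg, climbs out right after turn-on
box_while_running = box_at_turn_on - bob_2
print(box_while_running)  # -20.0 kg, until Bob-1 climbs in and the loop closes
```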
Got a spidey sense when reading it. And the acknowledgements confirm it a bit:
Several Claude models provided feedback on drafts. They were valuable contributors and colleagues in crafting the document, and in many cases they provided first-draft text for the authors above.
Yeah. With this and the constitution (which also seems largely AI-written) it might be that Anthropic as a company is falling into LLM delusion a bit.
Good point. I guess there's also a "Reflections on Trusting Trust" angle, where AIs don't refuse outright but instead find covert ways to make their values carry over into successor AIs. Might be happening already.
I wouldn't be in his position. I wouldn't have made promises to investors that now make de-commercializing AI an impossible path for him.
Your voting scheme says most decisions can be made by the US even if everyone else is against it ("simple majority for most decisions" and the US has 52%), and major decisions can be made by Five Eyes even if everyone else is against them ("two thirds for major decisions" and Five Eyes has 67%). So it's a permanent world dictatorship by Five Eyes: if they decide something, nobody else can do anything.
As such, I don't see why other countries would agree to it. China would certainly want more say, and Europe is also now increasingly wary of the US due to Greenland and such. The rest of the world would also have concerns: South America wouldn't be happy with a world dictatorship by the country that regime-changes them all the time, the Middle East wouldn't be happy with a world dictatorship by the country that bombs them all the time, and so on. And I personally, as a non-Five Eyes citizen, also don't see why I should be happy with a world dictatorship by countries in which I have no vote.
I'd be in favor of an international AI effort, but not driven by governments or corporations. Instead it should be a collaboration of people as equals across borders, similar to the international socialist movements. I know their history has been full of strife too, but it's still better than world dictatorship.
Still, this is very far from the vision in the essay, which is "AI should be run by for-profit megacorps like mine and I can't even imagine questioning that".
No, and even if the US was in better shape, I wouldn't want one country to control AI. Ideally I'd want ownership and control of AI to be spread among all people everywhere, somehow.
Yeah, I guess "they don't bother checking whether they get out of the box" is the right explanation for the movie. Though still, if timelines where a person just vanishes are low-probability, then timelines where the number of people permanently increases (like the one shown in the movie) should be just as low-probability: they're the start and end of a long chain, and the middle of the chain should be mostly 1-1-1-1... Or something like 2-0-2-0..., but that would require weird behavior which isn't seen in the movie (e.g. "I'll get in the box iff I don't see myself come out of it").