Me on the far left
Heh.
We get infinite hot water in our homes by turning a tap. We get antibiotics
Did billionaires give us these?
I mean, in a haha only serious way. Scientific curiosity gave us some nice stuff (like antibiotics). Government spending gave us some nice stuff (like running water). Even military spending gave us some nice stuff (like the internet). But the gifts of the rich, when I think of them, are a more mixed affair. Zuckerberg could retire today and have more money than he could ever spend, yet he keeps working, making Facebook more and more soul-destroying for regular people. What's his motive? Lots of people at AI labs could retire today as deca-millionaires, yet they keep working, trying to make sure that their company in particular gets to kill humanity first. Etc.
That's what people mean when they speak against the rich. You've set up a game where people who are really into "number go up" get to the top, and make the number go up more and more, without minding the effects on everyone else. Turn the internet into brain rot; fight a war to keep selling drugs (like the East India Company did in China); use child slave labor (like Nestle); or kill humanity.
Now, the argument PG and others make is that in real life you don't get to choose between first and second best. The first best is always out of reach, and the true choice is between second and third best. If you don't want billionaires getting what they want via the money mechanism, you'll get strongmen getting what they want via the power mechanism. You'll be oppressed and won't even have iPhones. And there's truth to that.
But it suggests a reframing that might be helpful. For practical purposes, there's a maximum amount of money you can spend on personal consumption: if you have tens or hundreds of millions, you should be all set. And there's a minimum amount of money that makes you a danger to society, able to buy laws, screw over communities and so on; that number is also maybe around hundreds of millions, or single-digit billions. The second number seems higher than the first. So we can allow people to essentially max out their personal consumption, thus maxing out their selfish incentive to do good things for society, while still stopping them from becoming a danger.

This would also mean mandatory dilution of corporate control: when a company gets big enough to push on society, the public should get a bigger and bigger say in how it's run, with founders and investors keeping enough control to be ultra rich but not enough to run their bulldozer over society. How's that sound?
Yeah, I guess "they don't bother checking whether they get out of the box" is the right explanation for the movie. Though still, if timelines where a person just vanishes are low-probability, then timelines where the number of people permanently increases (like the one shown in the movie) should be just as low-probability: they're the two ends of a long chain, and the middle of the chain should be mostly like 1-1-1-1... Or something like 2-0-2-0..., but that would require weird behavior which isn't seen in the movie (e.g. "I'll get in the box iff I don't see myself come out of it").
Branching timelines have to come with probabilities, and that's where the wheels fall off. Imagine you're Carol, living on the other side of town, not interacting with the machine at all. Then events like the ones in the movie happen. Before the events, there was one permanent Aaron. After the events, there's either one or more permanent Aarons, depending on which timeline Carol ends up in. But this violates conservation of Aarons weighted by probability: a weighted sum of 1's and 2's (and 3's and so on) is bigger than a weighted sum of just 1's. Some Aarons appeared out of nowhere.
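To make the bookkeeping explicit (my notation, nothing from the movie): let $p_i$ be the probability of branch $i$ and $n_i$ the number of permanent Aarons in it, so before anyone touches the machine the expected count is $\sum_i p_i n_i = 1$. If every branch keeps $n_i \ge 1$ and a movie-like branch $j$ with $n_j \ge 2$ has $p_j > 0$, then

$$\sum_i p_i n_i \;\ge\; \sum_i p_i + p_j \;=\; 1 + p_j \;>\; 1,$$

so the numbers can only balance if some branch with nonzero probability ends up with $n_i = 0$.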
Things could be balanced out if there were some timeline with reasonably high measure, consistent with the behavior of folks in the movie, which ended up with 0 permanent Aarons. But what is it? Is it some timeline where a box never had an Aaron climb out of it, but had an Aaron climbing into it later? Why would he do that?
It's basically "time goes backwards inside the box when it's turned on". So you can turn the box on in the morning and immediately see you-2 climb out of it. The two of you coexist for a day, and you-2 shares some future information with you. In the evening you set the box to wind down and climb inside, wait several hours, then climb out as you-2 in the morning and relive the events of the day from that perspective. At the end of that day you-1 climbs into the box and is never seen again, and you remain.
When put this way, it's nice and consistent. But in the movie some copies actually stop their originals from getting in the box, resulting in permanent duplication. And that seems really hard to explain even with a branching timelines model. If there's a timeline that ends up with two permanent Bobs, there must be some other timeline with no permanent Bobs at all, due to conservation of Bobs. The only way such a timeline could appear is if Bob turned the machine on, then nobody climbed out and he climbed in, but Bob could simply refuse to do that.
Another cool thing is that the time machine can also provide antigravity. Consider this: you assemble a box weighing 50 kg and turn it on. Immediately, Bob-2 climbs out, weighing 70 kg. So by conservation of mass, the box has to weigh -20 kg until Bob-1 gets in it and it shuts off again. The movie doesn't spell that out, but it does spell out that they first started working on antigravity and then got time travel by accident, so that's really good writing.
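(Spelling out the mass balance, with $m$ standing for the running box's weight, which is my notation: the room holds $50$ kg before turn-on and $m + 70$ kg right after Bob-2 steps out, so $m + 70 = 50$, i.e. $m = -20$ kg.)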
Got a spidey sense when reading it. And the acknowledgements confirm it a bit:
Several Claude models provided feedback on drafts. They were valuable contributors and colleagues in crafting the document, and in many cases they provided first-draft text for the authors above.
Yeah. With this and the constitution (which also seems largely AI-written) it might be that Anthropic as a company is falling into LLM delusion a bit.
Good point. I guess there's also a "Reflections on Trusting Trust" angle, where AIs don't refuse outright but instead find covert ways to make their values carry over into successor AIs. Might be happening now already.
I wouldn't be in his position. I wouldn't have made promises to investors that now make de-commercializing AI an impossible path for him.
Your voting scheme says most decisions can be made by the US even if everyone else is against them ("simple majority for most decisions", and the US has 52%), and major decisions can be made by Five Eyes even if everyone else is against them ("two thirds for major decisions", and Five Eyes has 67%). So it's a permanent world dictatorship by Five Eyes: whatever they decide, nobody else can stop it.
As such, I don't see why other countries would agree to it. China would certainly want more say, and Europe is also increasingly wary of the US these days, due to Greenland and such. The rest of the world would also have concerns: South America wouldn't be happy with a world dictatorship by the country that regime-changes them all the time, the Middle East wouldn't be happy with a world dictatorship by the country that bombs them all the time, and so on. And I personally, as a non-Five Eyes citizen, also don't see why I should be happy with a world dictatorship by countries in which I have no vote.
I'd be in favor of an international AI effort, but not driven by governments or corporations. Instead it should be a collaboration of people as equals across borders, similar to the international socialist movements. I know their history has been full of strife too, but it's still better than world dictatorship.
That's fair: you want to have billions of dollars' worth of "steering influence". But you are human. Humans have not only noble motives, but base ones too. Empirically, humans who get billions of dollars' worth of "steering influence" usually end up using most of it to get more billions. In my comment I gave examples.
Maybe you're a special human, and going through the process of getting a billion dollars will keep you noble and uncorrupted. I don't know; nobody knows until they actually go through it. But on base rates, I'm against any person getting billions of dollars' worth of unchecked steering influence. Including me and including you. Hope that makes sense.
EDIT: Rereading my reply, I see it's a bit off target. I won't delete it, because deleting comments is a bad habit that I really should get rid of; but I'll just say that I now see the descriptive part of your argument too. It's true that if people can't satisfy their world-changing goals (or just power-hungry goals) by starting a business, they'll go down other avenues and who knows what'll happen. I'll need to think about that.