Present value: https://www.investopedia.com/terms/p/presentvalue.asp

If the question is confused, I'd also like to know why.

I have a sufficiently related question that I'll just post it here:

  • If everyone knew that Alphabet would create a fast-take-off friendly superintelligence in 2120, except (or including, depending on your values) that it would produce a libertarian world (meaning the resulting wealth would go to the shareholders), how much would Alphabet be worth today? (Related: after which arrival year does it stop being worth some relevant threshold? See the discounting sketch below.)

(Of course, this question is just an approximation of a potential future, but I still think it's a valuable one.)
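For concreteness, here is a minimal sketch of the discounting arithmetic behind the threshold-year part of the question. The 7%/year discount rate, the 10^22 USD payoff, and the 10^12 USD threshold are placeholder assumptions for illustration (the payoff roughly matches the back-of-envelope estimate further down), not claims about Alphabet.

```python
# Minimal sketch of present-value discounting for the question above.
# All figures are placeholder assumptions, not claims about Alphabet.

def present_value(future_value: float, rate: float, years: float) -> float:
    """Standard discounting: PV = FV / (1 + r)^years."""
    return future_value / (1.0 + rate) ** years

PAYOFF = 1e22       # assumed USD value of the payoff when the AI arrives
RATE = 0.07         # assumed 7%/year discount rate
THRESHOLD = 1e12    # assumed "relevant threshold" (order of a large-cap valuation)

# Present value, seen from 2020, of a payoff arriving in 2120:
print(f"PV of a 2120 payoff: ~{present_value(PAYOFF, RATE, 100):.1e} USD")

# Latest arrival year for which the present value still clears the threshold:
for year in range(2020, 2620, 10):
    if present_value(PAYOFF, RATE, year - 2020) < THRESHOLD:
        print(f"PV drops below {THRESHOLD:.0e} USD for arrivals after roughly {year}")
        break
```

At 7%/year, present value shrinks by roughly a factor of 1000 per century, which is where the ~10^3 discount in the estimate below comes from.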

My first stab at the Alphabet question:

  • 1.07 growth per year ^ 100 years ≈ a 10^3 multiple (equivalently, a ~1000× discount factor at 7%/year over a century)
  • value of the world (total assets) ≈ 10^13 USD
  • value of the Sun: ~10^25 operations ≈ ~10^25 human-life equivalents

Even with a ~1000× discount, and assuming each human-life equivalent of computation is worth 1 USD, the present value would still be ~10^22 USD, far more than total world asset value. This means that even a 1-in-10^9 chance of this happening would make the expected present value comparable to all of the world's assets, so, opportunity cost aside, everything should be invested there. The opportunity cost mostly just means other investments (like other AI companies) might offer a higher probability of the same payoff. Why aren't AI companies worth massively more?
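To make the comparison explicit, here is the same back-of-envelope arithmetic run end to end; every figure is this post's own rough assumption, not an independent estimate.

```python
# Re-running the back-of-envelope numbers above; every figure is the
# post's own rough assumption.

discount = 1.07 ** 100        # ~10^3: discount factor over a century at 7%/year
sun_value_usd = 1e25          # ~10^25 human-life equivalents, valued at 1 USD each
world_assets_usd = 1e13       # assumed total world asset value

pv = sun_value_usd / discount                      # ~1.2e22 USD
print(f"discount factor over 100 years: ~{discount:.1e}")
print(f"present value:                  ~{pv:.1e} USD")
print(f"ratio to world assets:          ~{pv / world_assets_usd:.0e}")

# Probability at which the expected present value merely matches world assets:
print(f"break-even probability:         ~{world_assets_usd / pv:.0e}")
```

On these assumptions the break-even probability comes out around 10^-9, which is the comparison against total world assets made above.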

Related post: Why don't singularitarians bet on the creation of AGI by buying stocks?

Related: I saw an article (I think on https://aiimpacts.org/, but I can't find it again) proposing that economic growth might never again be as fast as it was between 1990 and 2010, while still remaining exponential. Do you have a link to that article?

Motivations for asking:

  • Better understanding how economics works
  • Maybe invest in AI companies
1 comment:

How much the AI benefits you would depend on all sorts of features of your utility function. If you only care about yourself, and you are going to be dead by then, you don't care. If you are a utilitarian, but think that the other shareholders are also utilitarian, you don't care. If you think the amount of resources needed to give you a maximally pleasant life is small, you are selfish, and at least a few shareholders will give 1% of their resources to the rest of humanity, you don't care. You should only be buying stocks if you want something that the other shareholders don't want and that takes a LOT of resources.

However, I think that you would need a bizarre sequence of events to get a shareholder value maximiser. In order to be able to make something like that, you need to have solved basically all of friendly AI theory.

Side note: "Shareholder value maximizers" that are reinforcement learning agents trained on the company's stock market data are easier to make, and they will dismantle the actual shareholders to convert their atoms into cash (or computers doing electronic fund transfers). This is something entirely different.

The AIs I am talking about are fully friendly CEV maximizers, just pointed at the list of shareholders rather than the list of all humans. Anyone who can make such a thing can easily set the AI to maximize the CEV of just themselves, or of all humanity (both easier groups to define than "shareholders"). A moral programmer who didn't want to seize the reins of destiny might put all of humanity into the CEV; a selfish programmer might put only themselves; a team of selfish programmers might put all of themselves. (Moral programmers might put only themselves in the CEV, on the grounds that their own morality already includes the desire not to take over the world; this might be a good idea if a broader CEV procedure would be seriously screwed up by nutcases.) But in order for a shareholder value maximizer to be created, the shareholders have to exert meaningful power through several layers of management and on to the programmers, when each layer has a huge incentive to cheat.

Actually, I am not sure what a "libertarian" future means in this context. I expect that the world will be sufficiently different that current political philosophies just won't apply. Any friendly AI can take in a utility function and decide what policies maximise it. Worry about which utility function is being maximized, and leave the AI to figure out how to maximize it. (The answer will probably not be communism, libertarianism, or whatever your favourite political philosophy is. Within the context of medieval farms, an argument about whether oxen or donkeys are better at ploughing fields is entirely meaningful. Don't expect a nanotech-wielding superintelligence to be on the pro-oxen or pro-donkey side. Obviously one of the two must be better, and the AI could figure out which, but it will also figure out some third alternative that makes the other two options look very similar.)