Lumifer comments on Open thread, Nov. 10 - Nov. 16, 2014 - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Why? In both cases money becomes meaningless post-singularity.
If you expect a happy singularity in the near future, you should actually pull your money out of investments and spend it all on consumption (or risk mitigation).
My idea was that in case (a), money was becoming worthless but ownership of the companies driving the singularity was not. In that case, the price of shares in those companies would skyrocket toward infinity as everyone piled their soon-to-be-worthless money into them.
Of course, if the ownership of those companies was not going to matter either, then what you said would be true.
This is something that I think is neglected (in part because it's not the relevant problem yet) in thinking about friendly AI. Even if we had solved all of the problems of stable goal systems, there could still be trouble, depending on whose goals are implemented. If it's a fast take-off, whoever cracks recursive self-improvement first basically gets Godlike powers (in the form of a genie that reshapes the world according to your wish). They define the whole future of the expanding visible universe. There are a lot of institutions that I do not trust to have the foresight to think "We can create utopia beyond anyone's wildest dreams," and that would instead default to "We'll skewer the competition in the next quarter."