DanArmak comments on How to get that Friendly Singularity: a minority view - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Not at all. It's the only truly valuable thing to do. If I thought I had even a tiny chance of succeeding, or if I had any concrete plan, I would definitely try to build a singleton that would conquer the world.
I hope that the values I would impose in such a case are sufficiently similar to yours, and to (almost) every other human's, that the disadvantage of being ruled by someone else would be balanced for you by the safety from ever being ruled by someone you really wouldn't like.
A significant part of the past discussion here and in other singularity-related forums has been about verifying that our values are in fact compatible in this way. This is a necessary condition for community efforts.
But no such condition holds, so why bring it up? Unless you have a reason to think some other condition holds that changes the balance in some way — merely saying that some condition might hold isn't enough. And if some such condition does hold, we'll encounter it anyway while trying to conquer the world, so no harm done :-)
I'm quite certain that we're nowhere near such a hypothetical limit. Even if we are, this limit would have to be more or less exponential, and exponential curves with the right coefficients have a way of fooming that tends to surprise people. Where does Robin talk about this?
Not so much. Multiple FAIs of different values (cooperating in one world) are equivalent to one FAI of amalgamated values, so a community effort can be predicated on everyone getting their share (and, of course, that includes altruistic aspects of each person's preference). See also Bayesians vs. Barbarians for an idea of when it would make sense to do something CEV-ish without an explicitly enforced contract.
You describe one form of compatibility.
How so? I don't place restrictions on values beyond what's obvious in normal human interaction.