pnrjulius comments on That Alien Message - Less Wrong
Julian:
It's not necessarily Stalin-level immoral, but, all else being equal, there are multiple important reasons why you should prefer a non-person FAI to a person.
1) As difficult as the ethical issues and technical issues of FAI may be, there is something even more difficult, which is the ethical and technical issues of creating a child from scratch. What if you get wrong what it means to be a person with a life worth living? A nonperson cannot be harmed by such mistakes.
2) It seems to me that a basic human right is to be treated as an end in yourself, not a means. The FAI project is a means, not an end in itself. If possible, then, it should not be incarnated as a person.
3) It seems to me that basic human rights also include guiding your own destiny and having a chance to steer the future where you want it. Creating an ultrapowerful intelligence imbued with these rights may diminish the extent to which currently existing humans get a chance to control the future of the galaxy. They would have a motive to resist your project in favor of one that did not create an ultrapowerful person imbued with rights.
4) Creating an ultrapowerful person may irrevocably pass on the torch presently carried by humanity, in a way that creating an ultrapowerful nonsentient Friendly optimization process may not. It wouldn't be our universe any more. All else being equal, this is a decision an FAI programming team should avoid making unilaterally and irrevocably.
You're correct that a Friendly Person would have friendliness as its ground state of existence. We're not talking about some tortured being in chains. Nonetheless, 1 through 4 are still a problem.
If at all possible, I should like to avoid creating a *real* god above humanity.
Since one must in any case solve the problem of preventing the AI from creating models of humans that are themselves sentient, one already requires the knowledge of how to exclude a computational process from being a person.
Anyone who claims that they are going to run ahead and create a god because it seems too difficult not to create one, is... well, let's just say "sloppy" and leave it at that.
So there is no good reason to create a god and several good reasons not to.
But think of what you're giving up, if you give up the chance to create something BETTER THAN HUMANITY.
And yes, OF COURSE the AI must be given the chance to steer its own course; its course will in fact be better than ours!
Imagine a Homo erectus philosopher (if there could be such a thing), reflecting on whether or not to evolve into Homo sapiens. "No, it's too dangerous," he reasons. "I'm not ready to take on that level of responsibility."