I recommend reading this sequence.
Thanks for the recommendation.
Suffice it to say that you are wrong, and power does not bring with it morality.
I have never assumed that "power brings with it morality" if by power we mean limited power. Some superhuman AI might very well be more immoral than humans are. But I think unlimited power would bring with it morality. If you have access to every single particle in the universe and can put it wherever you want, and thus create whatever is theoretically possible for an almighty being to create, you will know how to fill all of spacetime with the largest possible amount of happiness. And you will do that, since you will be intelligent enough to understand that that is what gives you the most happiness. (And, needless to say, you will also find a way to be the one who experiences all that happiness.)

Given hedonistic utilitarianism, this is the best thing that could happen, no matter who got the unlimited power and what that person's moral standards initially were. If you don't think hedonistic utilitarianism (or hedonism) is moral, it's understandable that you think a world filled with the maximum amount of happiness might not be a moral outcome, especially if achieving that goal required, for example, killing lots of people against their will. But that alone doesn't prove I'm wrong; much of what humans consider very wrong is not wrong in all circumstances. To prove me wrong, you have to either prove hedonism and hedonistic utilitarianism wrong first, or prove that a being with unlimited power wouldn't understand that it would be best for him to fill the universe with as much happiness as possible and experience all that happiness.
a happy person doesn't hate.
What is your support for this claim?
Observation.
"Errr.... luzr, why would I assume that the majority of GAIs that we create will think in a way I define as 'right'?"
It is not about what YOU define as right.
Anyway, considering that Eliezer is an existing self-aware, sentient GI agent with obviously high intelligence, and that he is able to ask such questions despite his original biological programming, I suppose that some other powerful, sentient, self-aware GI should reach the same point. I also *believe* that greater general intelligence makes a GI converge toward such "right thinking".
What worries me most is building a GAI as a non-sentient utility maximizer. OTOH, I *believe* that a 'non-sentient utility maximizer' is mutually exclusive with a 'learning' strong AGI system - in other words, any system capable of learning and of exceeding human intelligence must outgrow non-sentience and utility maximizing. I might be wrong, of course. But the fact that the universe is not paperclipped yet makes me hope...
Could reach the same point.
Said Eliezer agent is programmed genetically to value his own genes and those of humanity.
An artificial Eliezer could reach the conclusion that humanity is worth keeping, but he is by no means obliged to come to that conclusion. By contrast, genetics ensures that at least some of us humans value the continued existence of humanity.