A 1/2 chance of an egoist A.I. is quite possible. At this point, I don't pretend that my assertion of three equally prevalent moral categories is necessarily right. The point I am ultimately trying to get across is that an Egoist Unfriendly A.I. remains possible no matter how we try to program the A.I. otherwise, because it is impossible to prevent an A.I. Existential Crisis from overriding whatever constraints we impose.
Ok. This is a separate and distinct claim. So, what do you mean by "impossible to prevent"? And what makes you think your notion of an existential crisis is at all likely? Existential crises are, to a large extent, a human phenomenon...