Tim_Tyler comments on Qualitative Strategies of Friendliness - Less Wrong

Post author: Eliezer_Yudkowsky 30 August 2008 02:12AM



Comment author: Tim_Tyler 30 August 2008 07:27:10PM 0 points

Re: All this talk of moral "danger" and things "better" than us, is the execution of a computation embodied in humans, nowhere else, and if you want an AI that follows the thought and cares about it, it will have to mirror humans in that sense.

"Better" - in the sense of competitive exclusion via natural selection. "Dangerous" - in that it might lead to the near-complete obliteration of our biosphere's inheritance. No other moral overtones implied.

An AI will likely want to avoid being obliterated - and will want to consume resources and use them to expand its domain. It will share these properties with all living systems, not just us. It doesn't need to do much "mirroring" to acquire these attributes - they are a naturally-occurring phenomenon among expected utility maximisers.
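The claim above - that resource acquisition falls out of expected-utility maximisation itself, rather than out of mirroring humans - can be illustrated with a toy sketch. Everything here is hypothetical and not from the comment: the action names, probabilities, and utilities are made up purely to show that a generic maximiser, given an outcome model where resources raise the odds of good outcomes, picks resource acquisition without any human-specific machinery.

```python
def expected_utility(action, outcomes):
    """Sum of probability-weighted utilities for an action's outcomes."""
    return sum(p * u for p, u in outcomes[action])

# Hypothetical outcome model: each action maps to (probability, utility)
# pairs. Nothing here encodes "wanting" resources directly - the
# preference emerges from maximisation over expected utility alone.
outcomes = {
    "do_nothing":        [(1.0, 1.0)],
    "acquire_resources": [(0.9, 5.0), (0.1, 0.0)],
    "self_destruct":     [(1.0, 0.0)],
}

best = max(outcomes, key=lambda a: expected_utility(a, outcomes))
print(best)  # the maximiser selects resource acquisition
```

The point of the sketch is only that the behaviour is structural: swap in any terminal goal whose outcomes improve with resources, and the same choice falls out.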