Tim_Tyler comments on Qualitative Strategies of Friendliness - Less Wrong

Post author: Eliezer_Yudkowsky 30 August 2008 02:12AM



Comment author: Tim_Tyler 31 August 2008 03:20:53AM -1 points

> In what sense is the descendant (through many iterations of redesign and construction) of an AI solely focused on survival, constructed by some *other* human my descendant or yours?

It is not realistic to think that one human can construct an AI. In the hypothetical case where someone else successfully did so, they might preserve some of my genes by preserving their own genes - e.g. if they were a relative of mine - or they might include my genes in an attempt to preserve some biodiversity.

> Where does your desire come from?

I am the product of an evolutionary process. All my ancestors actively took steps to preserve their inheritance.

> Its achievement wouldn't advance the preservation of your genes (those would be destroyed)

What - all of them? Are you sure? What makes you think that?

> Maybe so, but with strong AI the problem [of making uploads] would be quite simple.

You mean to say that its solution would occur more rapidly? That is true, of course. It's the difference between the project taking mere decades and being totally intractable as a purely human undertaking.

> In this scenario, uploading would seem to be quite attractive.

To you, maybe. I reckon a poll asking whether people would be prepared to have their physical brain destroyed in order to live a potentially indefinite lifespan in a simulation would not turn out very well. Show people a video of the brain-scanning process, and the results would be even worse.