nshepperd comments on Safety Culture and the Marginal Effect of a Dollar - Less Wrong

Post author: jimrandomh 09 June 2011 03:59AM



Comment author: PhilGoetz 18 June 2011 08:15:48PM

The model is this: assume that if an AI is created, it's because one researcher, chosen at random from the pool of all researchers, has the key insight; and humanity survives if and only if that researcher is careful and takes safety seriously.

I contest this use of the term "safety". If your goal is for humanity to survive, say that your goal is for humanity to survive. Not to "promote safety".

"Safety" means avoiding certain bad outcomes. By using the word "safety", you're trying to sneak past us the assumption "humans remaining the dominant lifeform = good, humans not remaining dominant = bad".

The argument should be over what humans have that is valuable, and how we can contribute that to the future. Not over how humans can survive.

Comment author: nshepperd 19 June 2011 12:43:23AM

Well, our distant descendants, whether uploads or cyborgs or other life-forms, could be considered part of "generalized humanity", as long as they retain what humans have that is valuable.

And regardless, we certainly want current humanity (that is, all the people alive now) to survive, in the sense of not being killed by the AI.

My point is that "the survival of humanity" doesn't necessarily mean we have to retain this physical form, and I don't think the OP was using the words in that sense.