PhilGoetz comments on Safety Culture and the Marginal Effect of a Dollar - Less Wrong

Post author: jimrandomh 09 June 2011 03:59AM · 23 points

Comment author: PhilGoetz 18 June 2011 08:15:48PM · 1 point

The model is this: assume that if an AI is created, it's because one researcher, chosen at random from the pool of all researchers, has the key insight; and humanity survives if and only if that researcher is careful and takes safety seriously.
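(A minimal formalization of that model, in my own notation rather than the post's: suppose $k$ of the $N$ researchers take safety seriously, and the key insight lands on one researcher chosen uniformly at random. Then, conditional on the AI being created,

$$P(\text{humanity survives}) = \frac{k}{N},$$

so under this model a marginal dollar matters exactly insofar as it increases $k$.)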

I contest this use of the term "safety". If your goal is for humanity to survive, say that your goal is for humanity to survive. Not to "promote safety".

"Safety" means avoiding certain bad outcomes. By using the word "safety", you're trying to sneak past us the assumption "humans remaining the dominant lifeform = good, humans not remaining dominant = bad".

The argument should be over what humans have that is valuable, and how we can contribute that to the future. Not over how humans can survive.

Comment author: Will_Sawin 19 June 2011 02:37:23AM · 1 point

What is value? What things are valuable, and what are not?

Everything that we know about value, everything that we can know, is encoded within the current state of humanity.

As long as that knowledge remains, there is hope for the Best Possible Future. It may be a future that includes no humans, but it will be a future based on that knowledge.

If that knowledge is destroyed, or if it loses power because it no longer rides inside the dominant life form, then the future will be, morally, chaos: as likely to eat babies as to love them.

Figuring out how we can contribute to the future, what should replace us, and so on, takes time, and that is time we do not have unless we focus on safety first.

Comment author: nshepperd 19 June 2011 12:43:23AM · 0 points

Well, our distant descendants, whether uploads or cyborgs or other life-forms, could be considered part of "generalized humanity", as long as they retain what humans have that is valuable.

And regardless, we certainly want current humanity (that is, all the people alive now) to survive, in the sense of not being killed by the AI.

My point is that it's not necessarily right to take "the survival of humanity" to mean that we have to retain this physical form, and I don't think the OP was using the words in that sense.

Comment author: timtyler 18 June 2011 11:45:19PM · -1 points

"Safety" means avoiding certain bad outcomes. By using the word "safety", you're trying to sneak past us the assumption "humans remaining the dominant lifeform = good, humans not remaining dominant = bad".

The argument should be over what humans have that is valuable, and how we can contribute that to the future. Not over how humans can survive.

Agreed. People seem to get hold of the idea that humans are good and machines are bad, and then fall into an us-versus-them mindset. Surely all the best possible futures involve an engineered world, where the agony of being a meat-brained human cobbled together by natural selection is mostly a distant memory.

Comment author: Will_Sawin 19 June 2011 02:38:02AM · 0 points

But we have to keep humans around until they are capable of engineering that world carefully and without screwing it up. If we don't engineer it, who will?

Comment author: timtyler 19 June 2011 08:29:12AM · 0 points

Right. There are pretty good instrumental reasons for all the parties concerned to do that. Humans may also be useful for a while as a means of rebooting the system if there is a major setback: they have successfully booted things up once already, and other backup systems are likely to be less well tested.