timtyler comments on Reflections on a Personal Public Relations Failure: A Lesson in Communication - Less Wrong

37 Post author: multifoliaterose 01 October 2010 12:29AM




Comment author: timtyler 01 October 2010 05:47:26PM *  -1 points [-]

The marginal effect that donating a dollar to SIAI has on the probabilities of friendly AI being developed, and of human extinction.

P(eventual human extinction) looks enormous - since the future will be engineered. It depends on exactly what you mean, though. For example, is it still "extinction" if a future computer sucks the last million remaining human brains into the matrix? Or if it keeps their DNA around for the sake of its historical significance?

Also, what is a "friendly AI"? Say a future machine intelligence looks back on history - and tries to decide whether what happened was "friendly". Is there some decision process they could use to determine this? If so, what is it?

At any rate, the whole analysis here seems misconceived. The "extinction of all humans" could be awful - or wonderful - depending on the circumstances and on the perspective of the observer. Values are not really objective facts that can be estimated and agreed upon.

Comment author: NancyLebovitz 01 October 2010 06:30:17PM 2 points [-]

For example, is it still "extinction" if a future computer sucks the last million remaining human brains into the matrix? Or if it keeps their DNA around for the sake of its historical significance?

Or if all humans have voluntarily [1] changed into things we can't imagine?

[1] I sloppily assume that choice hasn't changed too much.