eirenicon comments on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions - Less Wrong

Post author: MichaelGR 11 November 2009 03:00AM




Comment author: eirenicon 11 November 2009 08:24:23PM  0 points

The probability that the species will become extinct because every individual human will die of old age is negligible compared to the extinction risk of insufficiently-careful AGI research.

I'm not talking about old age, I'm talking about death. This includes death from plague, asteroid, LHC mishap, or paperclip maximizer. I didn't say "cure death" or "cure old age" but "[solve] the problem of death". And for the record, to my mind, the likeliest solution involves AGI, developed extremely carefully - but as quickly as possible under that condition.

Having refreshed, I see you've changed the course of your reply to some degree. I'd like to respond further but I don't have time to think it through right now. I will just add that while I don't assign intrinsic value to individuals not yet born, I do intrinsically value the human species as a present and future entity - but not as much as I value individuals currently alive. That said, I need to spend some time thinking about this before I add to my answer. I may have been too hasty and accidentally weakened the implication of "extinction" through a poor turn of phrase.

Comment author: Nick_Tarleton 13 November 2009 03:46:15AM  2 points

I don't assign intrinsic value to individuals not yet born

Note that this is dynamically inconsistent: given the opportunity, this value implies that at time T, you would want to bind yourself so that at all times greater than T, you would still only intrinsically care about people who were alive at time T. (Unless you have 'overriding' values of not modifying yourself, or of your intrinsic valuations changing in certain ways, etc., but that sounds awfully messy and possibly unstable.)

(Also, that's assuming causal decision theory. TDT/UDT probably gives a different result due to negotiations with similar agents binding themselves at different times, but I don't want to work that out right now.)

Comment author: rhollerith_dot_com 11 November 2009 08:49:32PM 0 points

The probability that the species will become extinct because every individual human will die of old age is negligible compared to the extinction risk of insufficiently-careful AGI research.

I'm not talking about old age, I'm talking about death. This includes death from plague, asteroid, LHC mishap, or paperclip maximizer.

. . .

Having refreshed, I see you've changed the course of your reply to some degree.

I did, when I realized my first reply was vulnerable to the response you in fact made, which I quote above. (I should probably let my replies sit for 15 minutes before submitting them, to reduce the probability of confusing situations like this one.)

(And thank you for your reply to my question about your values.)