Right, the second argument is the one that concerns me, since it should be possible to convince people to adjust their preferences in some way that will make them consistent.
My suggestion here was simply to put a hard limit on the utility function. So, for example, instead of valuing lifespan without limit, there would be some value beyond which the AI is indifferent to extending it further. This kind of AI might take the lifespan deal up to a certain point, but it would not keep taking it forever, and in this way it would avoid driving its probability of survival down to a limit of zero.
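To make the proposal concrete, here is a minimal sketch of how a capped utility function changes the decision. The cap value, lifespans, and probabilities below are all invented for illustration; none of them come from the original dilemma's numbers.

```python
CAP = 1_000_000  # hard limit: lifespans beyond this add no utility (illustrative value)

def utility(lifespan_years: float) -> float:
    """Bounded utility: linear in lifespan up to a hard cap, flat afterwards."""
    return min(lifespan_years, CAP)

def should_take_deal(p_old: float, years_old: float,
                     p_new: float, years_new: float) -> bool:
    """Accept a gamble only if it raises expected (capped) utility."""
    return p_new * utility(years_new) > p_old * utility(years_old)

# Early deals trade a tiny bit of survival probability for a huge utility gain...
print(should_take_deal(0.8000, 1e3, 0.7999, 1e6))  # True
# ...but once the cap is reached, giving up any probability never pays,
# so the agent stops taking the deal instead of driving survival toward zero.
print(should_take_deal(0.7999, 1e6, 0.7998, 1e9))  # False
```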
I think Eliezer does not like this idea because he claims to value life infinitely, assigning ever greater values to longer lifespans and an infinite value to an infinite lifespan. But he is wrong about his own values: being a limited being, he cannot actually care infinitely about anything, and this is why the lifespan dilemma bothers him. If he actually cared infinitely, as he claims, he would not mind driving his probability of survival down to zero.
I am not saying (as he has described it elsewhere) that "the utility function is up for grabs." I am saying that if you understand yourself correctly, you will see that you do not yourself assign an infinite value to anything, so it would be a serious and possibly fatal mistake to build a machine that assigns an infinite value to something.
Yeah, I follow. I'll bring up another wrinkle (which you may already be familiar with): suppose the objective you're maximizing never equals or exceeds 20. You can reach 19.994, 19.9999993, 19.9999999999999995, but never actually reach 20. Then even though your objective function is bounded, you will still try to optimize forever, and may resort to increasingly desperate measures to eke out another .000000000000000000000000001.
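To put a number on that wrinkle: take a toy objective like U(x) = 20 - 20/(1 + x), which is bounded above by 20 but never attains it (this function is my own invented example, not anything from the dilemma). The marginal gain from further optimization shrinks but never reaches zero, so the incentive to keep pushing never disappears:

```python
def objective(x: float) -> float:
    """Toy bounded objective: approaches 20 as x grows, never reaches it."""
    return 20.0 - 20.0 / (1.0 + x)

# No matter how far along you are, multiplying your effort by 10 still
# buys a strictly positive sliver of objective value.
for x in [10, 1_000, 1_000_000, 1_000_000_000]:
    gain = objective(10 * x) - objective(x)
    print(f"x={x:>13,}  U={objective(x):.12f}  gain from 10x effort: {gain:.3e}")
```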
Edge.org has recently been discussing "the myth of AI". Unfortunately, although Superintelligence is cited in the opening, most of the participants don't seem to have looked into Bostrom's arguments. (Luke has written a brief response to some of the misunderstandings Pinker and others exhibit.) The most interesting comment is Stuart Russell's, at the very bottom:
I'd quibble with a point or two, but this strikes me as an extraordinarily good introduction to the issue. I hope it gets reposted somewhere it can stand on its own.
Russell has previously written on this topic in Artificial Intelligence: A Modern Approach and the essays "The long-term future of AI," "Transcending complacency on superintelligent machines," and "An AI researcher enjoys watching his own execution." He's also been interviewed by GiveWell.