douglas comments on Pascal's Mugging: Tiny Probabilities of Vast Utilities - Less Wrong

39 points · Post author: Eliezer_Yudkowsky · 19 October 2007 11:37PM


Comments (334)



Comment author: douglas · 20 October 2007 10:59:27AM · -2 points

Eliezer, I'd like to take a stab at the internal criterion question. One difference between me and the program you describe is that I have a hoped-for future. Say I'd like to play golf on Wednesday. Now, I could calculate the odds of Wednesday not actually arriving (nuclear war, asteroid impact...), or of me not being alive to see it (sudden heart attack...), and I would get an answer greater than zero. Why don't I operate on those non-zero probabilities? (This is the other difference between me and the program you describe.) I think it has to do with faith. That is, I have faith that my hoped-for future will occur, or at least some semblance of it. I seem to have this faith despite previous losses. Take the field of AI. There is a hoped-for future: a computer will demonstrate intelligence, and some hope the machine will become conscious. There is a faith that "we can solve these problems." I'm not sure the machine you describe would have either characteristic. I don't know how to formalize this, but it seems an important aspect of the situation.