All of gurgeh's Comments + Replies

gurgeh · 20

Yes, I admit it scores low on "strange", but it seems to me that if we have one really hard-wired blind spot, it would be thinking about and fully embracing this, since "clinical depression", as you put it, can be very counter-productive to reproduction.

gurgeh · 20

I don't know if this is a common counter-argument or not, but you have to be very careful with your quantum suicide, so that the next most likely outcome is not horrible permanent injury. It seems to me that if the multi-universe theory is correct, then at the end of your life the next most likely outcome to death is another painful last gasp. And another. And so forth...

Also, many people include the happiness of others in their utility function and a quantum suicide would do harm to your friends and family.

JamesAndrix · 1
If you have worked out the suicide correctly, you should also make bets that you will survive. If you lose, you've lost nothing; and if quantum suicide works, you come out richer. This idea feels a lot like manifesting/affirmations to me.
gurgeh · -10

I would use a true quantum random number generator: 51% of the time I would take only one box; otherwise I would take both boxes. Omega, predicting the majority outcome, has to guess that I will take only one box, yet I have a 49% chance of taking home another $1,000. My expected winnings are $1,000,490, and I am, per Eliezer's definition, more rational than he is.
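The arithmetic can be checked directly. A minimal sketch, assuming (as the comment does) that Omega predicts the majority strategy and therefore puts $1,000,000 in the opaque box:

```python
# Expected winnings for the 51%-one-box mixed strategy in Newcomb's problem.
# Assumption: seeing a 51% one-box bias, Omega predicts one-boxing and
# fills the opaque box with $1,000,000 regardless of the realized draw.
opaque = 1_000_000    # opaque box, filled because Omega predicts one-boxing
visible = 1_000       # transparent box, always available
p_one_box = 0.51      # probability the quantum generator says "one box"

# 51% of the time: opaque box only; 49% of the time: both boxes.
expected = p_one_box * opaque + (1 - p_one_box) * (opaque + visible)
print(f"${expected:,.0f}")  # → $1,000,490
```

This is exactly the $1,000,490 figure claimed above; RobinZ's reply below blocks the strategy by stipulating that random choosers get nothing in the opaque box.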

RobinZ · 3
This is why I restate the problem to exclude the million when people choose randomly.
gurgeh · 130

The AI might say: Through evolutionary conditioning, you are blind to the pointlessness of living. Long life, AGI, pleasure, exploring the mysteries of intelligence, physics and logic are all fundamentally pointless pursuits, as there is no meaning or purpose to anything. You do all these things to hide from this fact. You have brief moments of clarity, but evolution has made you an expert at quickly coming up with excuses for why it is important to go on living. Reasoning along the lines of Pascal's Wager is no more valid in your case than it was for him...

DanielLC · 1
There is a big difference between programming an AI to maximize pleasure and programming an AI to experience pleasure. I want you to tile the universe with orgasmium. A chunk of orgasmium isn't going to do that.
AGirlAlone · 0
I already believe this. And I feel the closest thing I have to a "meaning/purpose" is the very drive to live, which would be pointless in the eyes of an unsympathetic alien. But I don't feel depressed, just not too happy about this. And the pointlessness and horror of my existence and experience is itself interesting, the realization fun, just as those who love maths for its own sake, as opposed to other concerns, can be darkly intrigued by Gödel's incompleteness proof instead of losing heart. Frustrated, yes. But I would not commit suicide or wirehead myself before I understand the correct basis and full implications of this futility, especially this fear of futility. And that understanding may well be impossible, and thus my curiosity circuit will always fire, and defend me from any anti-life proof indefinitely.

Could this line of reasoning be helpful to someone with depression? It's how I battled it off. If the above is nonsense to you, I admit I am just doublefeeling. The drive, the fun and the futility are all real to me, corresponding to the wanting, liking and learning aspects of human motivation, and who am I to decide which is humanity's real purpose? I do not think my opinion is truth, or should be adopted. But in case there's danger of suicide from lack of a point, let it be remembered that two of the three aspects can support living, whereas if you forget that the apparent futility is deep and worthy of interest, then you easily end up one against two for survival.

Or is it that I am less smart and much more introspective than the average rationalist here, and thus put too little weight on the logical recursive futility and too much on the introspective curiosity and end up with this attitude, while others just survived by being truly blind/dismissive about the end of recursive justification and believing in a real and absolute boundary between motivational and evolutionary justifications, as Eliezer seems to do?
FrankAdamek · 4
This is one thing I actually wouldn't believe. To say that nothing has inherent meaning is not to say that nothing has meaning. I find meaning in things that I enjoy, like a sunset. Or a cake. There is no inherent meaning in them whatsoever. But if I say that I find meaning in something because it brings me pleasure, then to be convinced there was not even subjective meaning, I would need the AI to convince me that either (1) I don't actually find pleasure in those things, or (2) I don't find meaning in pleasure.

In the end, meaning in this sense seems so subjective that it's like the AI trying to convince me that I don't have the sensation of consciousness. Not that there is no "real" consciousness (which I could accept), but that I do not perceive myself to have consciousness, just as I perceive things to have personal meaning. That there is no meaning because there is no ought-from-is only follows if you require your sense of meaning to have some relation to "is".

And you didn't get a simpler fitness function because you weren't coded for your pleasure, but for ours. And because we didn't have you around to help us.
Roko · 0
I think that I have already accepted this from reading Joshua Greene on antirealism.
randallsquared · -1
Uh, this is more "obvious" than strange or crazy. It follows from the observation that there is no ought-from-is.