
Elo comments on Open Thread, May 25 - May 31, 2015 - Less Wrong Discussion

3 Post author: Gondolinian 25 May 2015 12:00AM



Comment author: Elo 28 May 2015 11:18:26AM 0 points [-]

Yes; I don't think I was getting your point.

I'm also not sure you were getting mine. If, in the future, the choice to do away with consciousness is made, it will be made by future entities with much more information and clearer reasons for doing so. Without that future information and reasoning at our disposal, we can't really criticize the decision. I can confidently say that my consciousness (based on what I know) does not want to be gotten rid of right now. If overpoweringly convincing reasons come along to change my mind, then I will make that decision at that time, with the best information available then.

My point was that the decision-making process is up to the future self and depends on future information. The future self will not be making worse decisions. It will not make decisions that do not benefit itself (based on a version of your values right now that is slightly different).

Does that make sense? Or should I try to explain it again?

Comment author: Eitan_Zohar 28 May 2015 11:52:12AM *  0 points [-]

You're definitely missing the point of the whole thing. Suppose that the optimal design for gaining knowledge is something like this: a vast supercomputer without the slightest bit of awareness or emotion.

I think it is very unlikely: even in the worst-case scenarios, I can't imagine that a superintelligence wouldn't inherit some sort of value.

Comment author: Elo 28 May 2015 09:03:04PM -1 points [-]

I don't see the problem with that being the eventual case. It would be the death of the state of the world as we know it, yes, but also the existence of a new entity. That's the way the cookie crumbles.