
Gunnar_Zarncke comments on Open Thread, Jun. 15 - Jun. 21, 2015 - Less Wrong Discussion

Post author: Gondolinian 15 June 2015 12:02AM




Comment author: Gunnar_Zarncke 15 June 2015 09:16:13PM 0 points

The only optimists in this regard are the people who are glad about all this because it means freedom for them. How they decide whether something is important still beats me; perhaps they have an internal value function that does not need to borrow terminal goals from an external source.

An answer I wrote in response to a related question

Does atheism necessarily lead to nihilism? (I think so, in the grand scheme of things? But the world/our species means something to us, and that's enough, right?)

was:

No. Atheism does remove one set of symbol-behavior chains in your mind, yes. But a complex mind will most likely lock into another, better-grounded set of symbol-behavior chains that is not nihilistic but - depending on your emotional setup - is somehow connected to terminal values, and it will act on that.

In a way you compartmentalize the thought of missing meaning away as a kind of unhelpful noise (that's how I phrased it at the LWCW). This is not unreasonable (ahem) - after all, the search for meaning is itself meaningless for a conscious process that has evolved in this meaningless environment.

Comment author: [deleted] 16 June 2015 08:52:10AM * 3 points

Well, this locking does not really seem to work well for me. I know that ideal terminal values should be along the lines of wanting other people to be happy, but I really struggle to get from the fact that some signals in some brains are labelled happiness to the value that these signals matter. Since I have a typically depressive personality and do not really care much about my own happiness, I cannot really care about others being happy either, and thus terminal values are not found. The struggle is largely this: if certain brain signals like happiness are not inherently marked with little XML tags saying "yes, you should care about this", where does the should, the value, come from?

The closest thing I can get is something similar to nationalism extended over all humankind - we are all 22nd cousins or something, so let's be allies and face this cold, cruel, lifeless universe together, or something similarly sentimental. But it isn't a terminal value; it is more like a bit of a feeling of affection. A true utilitarian would even care about a sentient computer being happy, or a sentient computer suffering or dying, and I just cannot figure out why.

Comment author: Gunnar_Zarncke 16 June 2015 03:05:18PM 0 points

Since I have a typically depressive personality and do not really care much about my own happiness, I cannot really care about others being happy either, and thus terminal values are not found.

Well. Thinking about it, I realize that for your kind of personality, falling back into caring and following goals indeed doesn't seem necessary. On the other hand, the arbitrariness of nihilism isn't that different from the passivity of depression - so in a way, maybe you already did lock back into the same pattern anyway?