chaosmage comments on Artificial Utility Monsters as Effective Altruism - Less Wrong Discussion

10 [deleted] 25 June 2014 09:52AM

Comment author: chaosmage 25 June 2014 12:26:52PM 1 point [-]

I can share the fake happiness of cartoon characters without them actually feeling anything.

So I have no reason to believe the happiness I feel when I see (or hear about) an actual human (or utility monster) being happy has anything to do with their "actual" (unknowable) state of mind.

I believe all that's happening is that my mind models their happiness in the same limbic system that also runs my own happiness, and the limbic system is less good than the neocortex at keeping representations separate.

And that limits my enjoyment of someone else's well-being to the amount of well-being I can model. If I were severely depressed, my power to imagine happiness would plummet and I'd gain nothing from giving resources to a utility monster, because the well-being it'd convert the resources into couldn't flow back to me.

Comment author: [deleted] 25 June 2014 04:32:13PM *  0 points [-]

I can relate to the intuition that our actual motivation to cause SWB for other minds is strongly modulated by our empathy. (That said, there are also more intellectual, philosophical forms of reasoning; however, I think they are practically weaker at motivating actual action.)

If I were severely depressed, my power to imagine happiness would plummet and I'd gain nothing from giving resources to a utility monster, because the well-being it'd convert the resources into couldn't flow back to me.

Ironically, it's the opposite for me: my depression has increased my desire to see a world that is generally more good than bad, and my idea of good vs. bad mostly reduces to hedonic states, because other values seem more symbolic than "real" to my intuitions (most of the time).

However, the good news is that none of us needs to gain strong enjoyment from modeling utility monsters. If the path I outlined is realistic, then no step requires much self-sacrifice. A very small fraction of the income of millions of mildly motivated altruists, donated over several hundred years, could produce far more SWB-over-suffering than has ever existed in nature or human history!