PhilipL comments on I played the AI Box Experiment again! (and lost both games) - Less Wrong Discussion

35 Post author: Tuxedage 27 September 2013 02:32AM

Comments (123)

Comment author: Ishaan 30 September 2013 10:10:36PM 1 point [-]

I guess it's hard for me to understand this because I view myself as immune to mental health harms from horrifying stimuli that I know to be fictional. Even if it's not fictional, the bulk of my emotions will remain unrecruited unless something I care about is being threatened.

It would take quite a lot of cash for me to risk an actual threat to my mental health... like being chronically pumped with LSD for a week, or getting a concussion, or having a variable beeping noise interrupt me every few seconds. But an AI box game would fall on a boring-stimulating spectrum, not a mental-damage one.

What if another human's happiness was on the line? After you've given a response to that question, qbrf lbhe bcvavba nobhg gur zbarl dhrfgvba punatr vs V cbvag bhg gung lbh pna qbangr vg naq fvtavsvpnagyl rnfr fbzrbar'f fhssrevat? Fnzr zbargnel nzbhag.

Comment author: Gunnar_Zarncke 30 September 2013 10:38:10PM 0 points [-]

I am quite able to flood myself with happiness. I do not need LSD for that. And I assume that it can be just as addictive. I assume that I am equally able to flood myself with sadness and dread. And I fear the consequences. Thus taking LSD and doing the AI box experiment are not that different for me. As I said, that is my weak spot.

I thought that the answer to the 'other person' question was implied by my post. I'll bear a lot if other people, especially those I care for, are suffering. After the rot13 I better understand your question. You seem to imply that if I bear the AI experiment, some funding will go to suffering people. Trading suffering in a utilitarian sense. Interesting. No. That doesn't seem to weigh up for me.