paper-machine comments on Risks from AI and Charitable Giving - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
That's absolutely false. The terror management theory people, for example, discovered that mortality salience still kicks in even if you tell people in advance that you're going to expose them to something intended to provoke their sense of mortality.
EDIT: The paper I wanted to cite is still paywalled, afaik, but the relevant references are mostly linked in this section of the Wikipedia article. The relevant study is the one where the threat was writing about one's feelings on death.
Okay. I may have mistakenly assumed that the only way I could get answers was to challenge people directly and emotionally. I didn't expect that I could simply ask how people associated with SI/LW could possibly believe what they believe and get answers. I tried that, but it didn't work.