red75 comments on Contrived infinite-torture scenarios: July 2010 - Less Wrong

24 Post author: PlaidX 23 July 2010 11:54PM




Comment author: Vladimir_Nesov 26 July 2010 02:57:27PM 0 points [-]

I think it's creative but wrong to think that an agent could achieve quantum-suicide-level anthropic superpowers by changing how much ve now cares about certain future versions of verself, instead of ensuring that only some of them will be actual successor states of ver patterns of thought.

You can't change your preference. The changed preference won't be yours. What you care about is even more unchangeable than reality. So we don't disagree here: I don't think you can get anthropic superpowers, because you care about a specific thing.

Comment author: red75 26 July 2010 04:35:00PM 1 point [-]

I am still puzzled about how preference corresponds to the physical state of the brain. Is preference only partially represented in our universe (the intersection of the set of universes that correspond to your subjective experience and the set that correspond to my subjective experience)?