Giles comments on How can I reduce existential risk from AI? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Some people might value occupying a particular mental state for its own sake, but that wasn't what I was talking about here. I was talking purely instrumentally - your interest in existential risk suggests you have goals or long-term preferences about the world (though I understand I may have got this wrong), and I was considering what might help you achieve those and what might stand in your way.
Just to clarify - is it my assessment of you as an aspiring utility maximizer that I'm wrong about, or am I right about that but wrong about something at the strategic level? (Or am I fundamentally misunderstanding your preferences?)