Giles comments on How can I reduce existential risk from AI? - Less Wrong

Post author: lukeprog 13 November 2012 09:56PM




Comment author: Giles 12 November 2012 06:13:38AM 4 points

Are you sure this is optimal? You seem to have goals, but you've thrown away three potentially useful tools: reward mechanisms, primate dominance rituals, and zero-sum competitions. Obviously you've gained grit.

Comment author: ialdabaoth 12 November 2012 06:20:26AM -1 points

Optimal by what criteria? And what right do I have to assign criteria for 'optimal'? I have neither power nor charisma; criteria are chosen by those with the power to enforce an agenda.

Comment author: Kaj_Sotala 12 November 2012 11:20:34AM 4 points

By the same right that you assign criteria according to which primate dominance rituals or competitive zero-sum exchanges are bad.

Comment author: Giles 12 November 2012 02:12:53PM 0 points

Some people might value occupying a particular mental state for its own sake, but that wasn't what I was talking about here. I was speaking purely instrumentally: your interest in existential risk suggests you have goals or long-term preferences about the world (although I understand that I may have got this wrong), and I was contemplating what might help you achieve those and what might stand in your way.

Just to clarify: is it my assessment of you as an aspiring utility maximizer that I'm wrong about, or am I right about that but wrong about something at the strategic level? (Or am I fundamentally misunderstanding your preferences?)