Desrtopa comments on John Danaher on 'The Superintelligent Will' - Less Wrong

Post author: lukeprog 03 April 2012 03:08AM


Comments (12)

Comment author: Desrtopa 03 April 2012 02:27:59PM 1 point

And here I was wondering if this was a paper from the esteemed Brazilian jiu-jitsu coach (who does in fact have a master's degree in philosophy).

Rather than doing pretty much anything, it seems more likely to me that a genuinely nihilistic agent would default to doing nothing.

Comment author: JohnD 04 April 2012 10:27:31AM 1 point

I think that's an interesting point. I suppose I was thinking that nihilism, at least in the way it's typically discussed, holds not that doing nothing is rational but, rather, that no goals are rational (a subtle difference, perhaps). This, in my opinion, might equate with all goals being equally possible. But, as you point out, if all goals are equally possible, the agent might default to doing nothing.

One might put it like this: the agent would be landed in the equivalent of a Buridan's Ass dilemma. As far as I recall, the possibility that a CPU would be landed in such a dilemma was a genuine problem in the early days of computer science. I believe there was some protocol introduced to sidestep the problem.
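The comment doesn't name the protocol, but the standard escape from a Buridan's-ass stall is simply an arbitrary tie-breaking rule: when two options score exactly equal, pick one by any fixed or random convention rather than waiting for a strict preference to emerge. A minimal sketch (the `choose` and `utility` names are illustrative, not from the original discussion):

```python
import random

def choose(options, utility, rng=random.Random(0)):
    """Pick a highest-utility option; break exact ties arbitrarily.

    A Buridan's-ass deadlock only arises if the agent insists on a
    strict preference. Any tie-breaking convention sidesteps it.
    """
    if not options:
        return None  # nothing to choose; "doing nothing" is the only outcome
    best = max(utility(o) for o in options)
    ties = [o for o in options if utility(o) == best]
    return rng.choice(ties)  # arbitrary tie-break prevents the stall
```

So an agent facing two equally attractive bales of hay still returns one of them, while an agent with an empty option set (or, per the comment above, no rational goals at all) returns nothing.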