shminux comments on Dissolving the Question - Less Wrong

44 Post author: Eliezer_Yudkowsky 08 March 2008 03:17AM


Comment author: SecondWind 27 April 2013 11:51:14PM 7 points

'Free will' is the halting point in the recursion of mental self-modeling.

Our minds model minds, and may model those minds' models of minds, but cannot model an unlimited sequence of models of minds. At some point it must end on a model that does not attempt to model itself; a model that just acts without explanation. No matter how many resources we commit to ever-deeper models of models, we always end with a black box. So our intuition assumes the black box to be a fundamental feature of our minds, and not merely our failure to model them perfectly.
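The bounded recursion described above can be sketched in code. This is a hypothetical illustration (the names and structure are mine, not the commenter's): a mind modeled with a finite resource budget, where each level of nesting spends one unit, and the innermost level is always an opaque "black box" that just acts without explanation.

```python
# Hypothetical sketch of the argument above: recursive mind-modeling
# with a finite depth budget always bottoms out in a black box.

def model_mind(depth_budget):
    """Return a nested model of a mind, built with a limited budget."""
    if depth_budget == 0:
        # No resources left to model this mind's own modeling:
        # it becomes a black box that "just acts without explanation".
        return "black box"
    # Otherwise, model this mind as something that itself models a mind,
    # spending one unit of the budget on each level of nesting.
    return {"models": model_mind(depth_budget - 1)}

print(model_mind(3))
```

However large the budget, the final layer is the unmodeled black box; on the comment's account, intuition mistakes that resource limit for a fundamental feature of minds and names it "free will."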

This explains why we rarely assume animals to share the same feature of free will, as we do not generally treat their minds as containing deep models of others' minds. And, if we are particularly egocentric, we may not consider other human beings to share the same feature of free will, as we likewise assume their cognition to be fully comprehensible within our own.

...d-do I get the prize?

Comment author: shminux 28 April 2013 03:26:08AM 1 point

...d-do I get the prize?

You have, in the local currency.

So, you are saying that free will is an illusion due to our limited predictive power?

Comment author: SecondWind 19 May 2013 07:10:35AM 0 points

...hmm.

If we perfectly understood the decision-making process and all its inputs, there'd be no black box left to label 'free will.' If instead we could perfectly predict the outcomes (but not the internals) of a person's cognitive algorithms... so we'd know, but not know how we know... I'm not sure. That would seem to invite mysterious reasoning to explain how we know, though 'free will' seems an unfitting mysterious answer for it.

That scenario probably depends on how it feels to perform the inerrant prediction of cognitive outcomes, and especially how it feels to turn that inerrant predictor on the self.