CronoDAS comments on How would not having free will feel to you? - Less Wrong

Post author: shminux 20 June 2013 08:51PM




Comment author: CronoDAS 21 June 2013 04:16:12AM 1 point

Interesting, so just going with the flow and not knowing what might happen next would feel like more free will to you? That seems almost like the opposite of what kalium suggests.

::follows link::

"the main difference is that I would do things without a need to exert 'willpower,' and with less internal monologue/debate."

"Willpower" and "internal monologue/debate" seem like processes that reflect uncertainty about future actions - there's a subjective sense that it's possible that I could have chosen to do something else. I'm not sure I see any difference, really.

Comment author: DSherron 21 June 2013 09:47:17PM 1 point

It's explicitly opposed to my response here. I feel like if I couldn't predict my own actions with certainty then I wouldn't have free will (more that I wouldn't have a will than that it wouldn't be free, although I tend to think that the "free" component of free will is nonsense in any case). Incidentally, how do you imagine free will working, even just in some arbitrary logically possible world? It sounds a lot like you want to posit a magical decision making component of your brain that is not fully determined by the prior state of the universe, but which also always does what "you" want it to. Non-determinism is fine, but I can't imagine how you could have the feeling of free will without making consistent choices. Wouldn't you feel weird if your decisions happened at random?

Comment author: CronoDAS 22 June 2013 10:52:46PM 2 points

I sort of think of "agent with free will" as a model for "that complicated thing that actually does determine someone's actions, which I don't have the data and/or computational capacity to simulate perfectly." Predicting human behavior is like predicting weather, turbulent fluid flow, or any other chaotic system: you can sort of do it, but you'll start running into problems as you aim for higher and higher precision and accuracy.

Does that make any sense? (I'm not sure it does.)

Comment author: DSherron 23 June 2013 01:01:37AM 0 points

I don't think it's particularly meaningful to use "free will" for that instead of "difficult to predict." After all, you don't say that weather has free will, even though you can't model it accurately; applying the label only to humans seems a lot like trying to sneak in a connotation that wasn't part of the technical definition. I think your concept captures some of the real-world uses of the term "free will," but it doesn't capture enough of the usage to help deal with the confusion around it. "Weather has free will" is a phrase I wouldn't be surprised to hear in colloquial English, but it doesn't seem to be talking about the same thing philosophers want to debate.

Comment author: CronoDAS 23 June 2013 01:48:39AM 0 points

I don't mean to imply that being difficult to predict is a sufficient condition for having free will... I'm kind of confused about this myself.