Another month, another rationality quotes thread. The rules are:
- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
- No more than 5 quotes per person per monthly thread, please.
- Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
It's clear that if you put someone in very similar situations and ask them to make a choice, over time they will converge on making a certain choice a certain percentage of the time. That percentage could easily match the one predicted by deterministic physics plus, e.g., quantum uncertainty. So I see no reason in principle why your account of free will could not be consistent with everything happening according to the laws of physics, provided those laws contain some randomness.
As for the feeling: if a deterministic chess computer had feelings, it would need to feel that it could make any move it wanted, because without that feeling it couldn't consider all the possibilities, and it can't decide on a move without considering all of them. This doesn't prevent chess computers from being deterministic, so it might not prevent you from having a feeling like that, even if your actions are in fact deterministic.
...I'm not seeing this. It can consider all the possibilities even if it knows that it must play the possibility with the highest odds of winning - in fact, knowing that means that it must consider all the possibilities in order to calculate those odds, surely?
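That point can be made concrete with a toy sketch (the move list and scoring rule below are hypothetical stand-ins, not a real chess engine): a fully deterministic selector still has to evaluate every legal move before it can know which one scores best, so "considering all the possibilities" and "being determined to play the best one" go together.

```python
# Toy sketch of a deterministic move selector. The moves and the
# scoring rule are hypothetical stand-ins, not a real engine.

def score(move: str) -> float:
    # Deterministic toy evaluation: the same move always gets the
    # same score (arbitrary rule, but repeatable).
    return sum(ord(c) for c in move) % 7

def choose_move(legal_moves: list[str]) -> str:
    # Even though the outcome is fully determined, the selector
    # still "considers" every possibility: max() must visit each
    # move once to find the highest-scoring one.
    return max(legal_moves, key=score)

moves = ["e2e4", "d2d4", "g1f3", "c2c4"]
print(choose_move(moves))  # same choice on every run
```

The determinism lies in the fact that identical inputs always yield the identical choice; it does not exempt the selector from examining each alternative.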