Today's post, Dissolving the Question, was originally published on 08 March 2008. A summary (taken from the LW wiki):
Proving that you are confused may not make you feel any less confused. Proving that a question is meaningless may not help you any more than answering it. Philosophy may lead you to reject the concept, but rejecting a concept is not the same as understanding the cognitive algorithms behind it. Ask yourself, as a question of cognitive science, why do humans make that mistake?
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Variable Question Fallacies, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
Humans do it naturally, but we can learn otherwise. So the question is: why can we learn (and then really feel) that the generator does not have free will, while the same process does not work for humans, especially for ourselves?
First, we are more complex than a random number generator. The random generator is just... random. Now imagine a machine that a) generally follows some goals, but b) sometimes makes random decisions, and c) rationalizes all its choices with "I was following this goal" or, in the case of a random action, "that would be too much" or "it seemed suspicious" or "I was bored". Perhaps it could have a few (potentially contradicting) goals, always randomly choose one, and take an action that advances that goal even if it harms the others. Moreover, it should allow some feedback; for example, by speaking with it you could increase the probability of some goal, and even if it then randomly chooses not to follow that goal, it would give some rationalization why. This would feel much more like free will.
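Not part of the original comment, but the hypothetical machine described above can be sketched in a few lines of Python. Everything here is made up for illustration: the class name, the example goals, the canned excuses, and the random_action_rate parameter are all assumptions, not anything from the post.

```python
import random

class RationalizingAgent:
    """Toy agent from the thought experiment above: it usually follows one of
    several (possibly conflicting) goals, sometimes acts randomly, and always
    produces a rationalization for whatever it just did."""

    def __init__(self, goals, random_action_rate=0.2):
        # goals: mapping of goal name -> selection weight (feedback can raise these)
        self.goal_weights = dict(goals)
        self.random_action_rate = random_action_rate

    def persuade(self, goal, strength=1.0):
        # "Feedback": talking to the agent raises the probability of a goal
        # being chosen, without guaranteeing it will be followed.
        self.goal_weights[goal] = self.goal_weights.get(goal, 0.0) + strength

    def act(self):
        names = list(self.goal_weights)
        weights = [self.goal_weights[g] for g in names]
        chosen_goal = random.choices(names, weights=weights)[0]

        if random.random() < self.random_action_rate:
            # Random action, rationalized after the fact.
            action = "do something unrelated"
            reason = random.choice([
                "that would be too much",
                "it seemed suspicious",
                "I was bored",
            ])
        else:
            # Goal-directed action, possibly at the expense of the other goals.
            action = f"take a step toward '{chosen_goal}'"
            reason = f"I was following the goal '{chosen_goal}'"
        return action, reason


# Usage: nudge the agent toward a goal; even when it then acts randomly,
# it still offers a rationalization for its choice.
agent = RationalizingAgent({"save money": 1.0, "impress friends": 1.0})
agent.persuade("save money", strength=2.0)
for _ in range(3):
    action, reason = agent.act()
    print(f"{action}  (because {reason})")
```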
On the other hand, I imagine that some people with human-predicting powers, like marketing or political experts, do not believe so strongly in human free will (with regard to their profession's topic; otherwise they compartmentalize), because they are able to predict and manipulate human actions.
Because of intra-species competition, we are complex enough to prevent other people from understanding and predicting us. As a side effect, this makes us incomprehensible and unpredictable to ourselves, too. Generally, we optimize for survival and reproduction, but we are not straightforward about it, because a person running a simple algorithm could easily be exploited. Sometimes the complexity brings an advantage (for example, when we get angry we act irrationally, but insofar as this discourages other people from making us angry, even this irrational emotion is an evolutionary advantage); sometimes the complexity is caused merely by bugs in the program.
We can imagine more than we can really do; for example, we can make a plan that feels real, but then find ourselves unable to follow it. But this lie, if it convinces other people (and we had better start by convincing ourselves), can bring us some advantage. So humans have an evolutionary incentive to misunderstand themselves; we have no such incentive toward other species or machines.
I know this is not a perfect answer, but it is the best I can give right now.