Determined: Life Without Free Will is a misguided attempt at moral reasoning based on scientific facts. Lacking a philosophical framework that can connect morality to science, the author relied on his own rather lenient intuition without realizing it. One might also say that he is another victim of false philosophical questions. This review has two parts: a negative part criticizing the book at the level of principle, and a positive part attempting to tackle the dilemma of determinism.
Part I: A book with bipolar attitudes
Sapolsky conceptualizes 'free will' as a governing element inside a body, free from physical laws, thereby qualifying it as supernatural. This intuitive definition is not inherently wrong, albeit not that useful in some philosophical views (I'll come back to this later). He devotes half of the book to rigorously disputing the existence of such a supernatural free will, and this half contains many useful scientific insights. But the conclusion is one already embraced by all naturalists, for whom the whole universe is governed by physical law and therefore "natural." And for antinaturalists, it's doubtful that any amount of empirical evidence will change their minds.
What is more problematic is when the book ventures to analyze the moral implications of the nonexistence of such supernatural free will. Had Sapolsky maintained his naturalist rigor, he would have discerned the absence of an established empirical grounding for morality as well; that is, morality is not something natural that obeys physical laws (unless one subscribes to moral naturalism). If one rejects the whole notion of free will for lack of empirical evidence substantiating its existence, one has no choice but to reject the whole notion of morality on the same ground. This would render meaningless any moral proclamation, of which the book contains an abundance.
If one wants to reason about morality with rigor, one must start from a solid philosophical foundation rather than one's own casual, day-to-day moral thinking. One of the first philosophical questions the author should have asked himself is how morality holds significance without empirical evidence substantiating its existence. Instead, his lack of awareness in this area is utterly disappointing, sometimes to the point of frustration.
Thus, the book treats its two main subjects, free will and morality, with completely different attitudes: free will with rigorous naturalist principles, and morality with lenient, casual intuitions. Upon such an uneven footing, the moral belief system it aims to build can't help but be incoherent.
So where exactly did Sapolsky go wrong in his moral reasoning? It's a confusion of identities. Let me explain. We start by analyzing his statement that you do not deserve anything because you don't have free will. To highlight the issue, let me rewrite "you don't have free will" as "your decision apparatus isn't free from deterministic physical laws." I believe this is what Sapolsky means, rather than "you don't have a neuron free from deterministic physical laws." So the statement becomes "you don't deserve anything because your decision apparatus isn't free from deterministic physical laws." The crucial ambiguity lies in whether "you" and "your decision apparatus" can be meaningfully distinguished for the purposes of moral judgment. To put it another way, what is the identity of "you"? Is it just "your decision apparatus" or something else? The obvious choice for most naturalists is that there is no distinction: your identity is synonymous with the activity in your brain, i.e., your decision apparatus, just as an advanced AI is indistinguishable from its software program. If this is the position Sapolsky takes, as he seems to for much of his book, then the statement should read "your decision apparatus doesn't deserve anything because it isn't free from deterministic physical laws," which is plainly problematic: should we not first understand how a "decision apparatus" might have or lack deservingness before evaluating the relevance of deterministic physical laws?
If someone made such a statement about an AI program, "An AI program doesn't deserve anything because it isn't free from deterministic physical laws," a natural reaction would be: why, or how, does an AI program deserve anything in the first place? A naturalist such as Sapolsky has to find a justification or basis for deservingness that does not invoke the notion of "freedom from physical laws." Any such justification for deservingness would render the original statement false. Conversely, if no justification is found, then there is no deservingness to begin with, which also nullifies the original statement. The statement thus reaches a dead end, which leaves us with the only other choice: a meaningful distinction exists between "you" as an entity capable of being morally judged and "your decision apparatus." The statement makes sense only when interpreted this way: "you" cannot be morally judged because "you" don't have control over "your decision apparatus." (This interpretation would make no sense whatsoever if "you" were synonymous with "your decision apparatus." That would be saying that your decision apparatus doesn't deserve anything because your decision apparatus has no control over your decision apparatus.) What, then, could constitute this distinction if not a form of dualism, with "you" representing the immaterial aspect and "your decision apparatus" the physical brain?
Thus, it’s clear that Sapolsky, who consistently refutes dualism in the first half of his book, inadvertently leans into it due to a lack of clarity regarding identities when discussing morality.
Part II: A neopragmatist's take on the dilemma of determinism
Concepts are not defined based on their truthfulness, i.e., how accurately they represent reality; instead, they are defined based on their practical usefulness for our goals. For example, the concept of "chair" is very useful for beings that can sit, but imagine a world full of chair-shaped objects and no animals that can sit: the concept of "chair" would be useless and would never have arisen in the first place. With the advancement of modern science, humans have introduced more and more concepts, such as cell, proton, and black hole, that aim to represent elements of nature ever more accurately. But for neopragmatists, it is a mistake to take accuracy of representation as the end. In fact, scientists themselves, especially those who work at microscopic scales, have learned to treat concepts as tools (their end being better prediction of measurements), unbothered by the lack of representation.
Similarly, we do not need to give the notion of "free will" a single definition that represents something in nature, e.g., a neuron free from physical laws. Such a definition is isolated and useless in many contexts, because it disconnects "free will" from the other concepts built upon it, which have not themselves been redefined to represent something in nature. Instead, by investigating the practical and historical context of the notion of "free will," I believe we have a better chance of resolving the so-called dilemma of determinism.
When determining the moral responsibility an individual bears for a given behavior, we take two factors into consideration: the number of alternative behavior possibilities available for them to choose from, and the sophistication level of their volition process, the internal process of reasoning and choosing between those possibilities. Everything else being equal, the individual is more morally responsible if their volition process is at a higher level of sophistication. A child is less morally responsible than a grown-up by this reasoning. On the other hand, given the same level of volition sophistication, the individual is less morally responsible if there are fewer alternative behavior possibilities. An impoverished, starving man stealing food is judged less morally responsible than a wealthy man for the same behavior. An individual facing a single possible behavior is not morally responsible for it. These two moral intuitions suffice for all our practical reasoning about moral responsibility. Note that they naturally do not concern whether volition itself has "free" alternatives or is deterministic, only its level of sophistication.
So why is it intuitive that a deterministic universe and moral responsibility are at odds with each other?
I think it comes from three confusions.
The first confusion is conceptual: the confusion between the behavior possibilities after the volition process and those before it. To demonstrate what I mean, I'll use a simplified scenario of a starving, impoverished man deciding whether to steal food. The man faces two choices: to steal or to starve to death. These are the behavior possibilities before his volition process, and they are what matters for judging moral responsibility. A deterministic world means a deterministic volition process, which in turn means there is only one possible outcome of it, one possible behavior the man ends up choosing. But this does not in any way change the fact that the man had two choices before he decided; it does not reduce his choices to one. People who claim that determinism renders any decision process useless, since there is "no choice," are confusing the choice possibilities before the decision with the fixed, single decision outcome. They reject determinism based on the absurdity of all decisions being useless, but the absurdity really comes from the confusion, not from determinism. This is apparent when we consider a robot agent. Imagine a robot programmed to make behavioral decisions on its own. No one objects that the program, and hence the robot's volition process, is deterministic. No one would suggest that it's useless for this robot to make any decisions because of its deterministic nature.
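The robot analogy can be made concrete with a minimal code sketch (the names and utility values are my own illustration, not from the book): a deterministic decision procedure still deliberates over more than one pre-decision alternative, even though its outcome is fixed by its inputs.

```python
def volition(options, utility):
    """Deterministically choose among alternative behaviors by scoring each.

    The process is fully deterministic: the same options and the same
    scoring function always produce the same decision. Yet the set of
    alternatives it deliberates over genuinely contains more than one
    behavior possibility.
    """
    scores = {behavior: utility(behavior) for behavior in options}
    return max(scores, key=scores.get)

# Illustrative scenario: the starving man's two pre-decision possibilities.
options = ["steal food", "starve"]
utility = lambda b: {"steal food": 0.9, "starve": 0.0}[b]

print(volition(options, utility))  # prints "steal food", every time
```

The point is that the fixed output does not collapse `options` to a single element; the two possibilities exist before the (deterministic) deliberation runs, which is exactly the distinction the confusion erases.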
The second confusion is historical. It originated when Christian philosophy introduced the term "free will" (liberum arbitrium) in the 4th century, traditionally meaning the lack of necessity in human will, which resembles the idea of a non-deterministic volition process. This "free will," I speculate, was introduced because the aforementioned moral intuitions were not satisfactory (to those Christian philosophers) in that they lack a causal relationship between volition and responsibility. The "lack of necessity in human will" sounds more causal and logical as an explanation of why one must be responsible for one's behavior (it is not). Regardless of this speculative motivation, the historical notion of "free will" was introduced AFTER our moral intuitions, not the other way around. If one gets this order confused, they will be tempted to believe that our moral intuitions are based on such a notion of "free will" and thus require it, producing a perceived contradiction between determinism and our moral intuitions.
The third confusion is linguistic. The original meaning of "free will" was gradually lost on laypersons, and the term came to closely resemble "volition." In day-to-day language, when people say "free will" they are referring to volition: the ability to reason and choose between alternative behaviors, free from coercion, but not free from physical laws. Volition is something that can be intuitively verified through introspection, so when people intuitively believe in the existence of "free will," they are often conflating it with volition. The reverse happens in the aforementioned intuitions about moral responsibility: people say the degree of their "free will" is a source of moral responsibility, when what they mean is the sophistication level of their volition. In philosophical discourse, though, the definition of "free will" is "clarified" back to its historical meaning, while the language of the intuitions remains unchanged. This is a grave mistake, since the two are very distinct: one can feel one's volition, but one cannot feel the physical laws governing one's microscopic parts, let alone feel freedom from them. With this linguistic confusion between the two terms, all of a sudden, people have intuitions contradicting determinism.
Thus, the dilemma of determinism is a false philosophical question caused by the three confusions above.