If you model causality as existing not between two events, but between an object and its actions, then you explain the regularity of the universe while also allowing for self-directed entities (i.e. causal chains only have to go back as far as the originating entity instead of the Big Bang).
No, you have to be the ultimate source of your decisions. If anything else in your past, such as the initial condition of your brain, fully determined your decision, then clearly you did not.
Words like "you" are far more problematic than words like "consciousness" that you eschew.
After all, even a young infant shows unmistakable signs of awareness, while the "I" self-concept doesn't arise until the middle of the toddler stage. The problem with free will is that there is no actual "you" entity to have it. The "you" is simply a conceptual place-holder built up from ideas of an individual body and its sensations.
"There - now my sensation of freedom indicates something coherent; and most of the time, I will have no reason to doubt the sensation's veracity. I have no problems about saying that I have "free will" appropriately defined; so long as I am out of jail, uncertain of my own future decision, and living in a lawful universe that gave me emotions and morals whose interaction determines my choices."
HA, both here and in your comments on the previous posts, you have continuously given the impression that you don't know what Eliezer is talking about.
Unknown, I think we'll have to leave that to the judgement of the audience of this blog. Personally, I think Eliezer presented what he's talking about pretty clearly, and at this stage I don't think it does much good to repeat my criticisms of his conclusions, beyond encouraging people to read not just the argument-from-quantum-physics-plus-one's-own-sensory-impressions, but the best neuroscience research results on the biology behind the sensations of choice and "free will".
Eliezer,
You may be referring to my draft paper "THE VIEW FROM NOWHERE THROUGH A DISTORTED LENS: THE EVOLUTION OF COGNITIVE BIASES FAVORING BELIEF IN FREE WILL". I don't think I've bothered to keep the paper online, but I remember you having read at least part of it, and the latest draft distinguishes between "actual control" and "novelist control". I believe earlier drafts referred to "control" and "control*".
I'm really glad to see someone as bright as you discussing free will. Here are some comments on this post:
On the Garden of Forking Paths I said something to the effect (can't find the post now): Mathematicians, because they strictly define their terms, have no difficulty admitting when a problem is too vague to have a solution. Just look at the list of Hilber...
Remarkable. Even the people who speculate that philosophers are deliberately not solving problems are refusing to carry out the necessary first step to solving them: defining their terms.
What quirk of human psychology could be responsible for this behavior?
Kip Werking, I can see where you're coming from, but "free will" isn't just some attempt to escape fatalism. Look at Eliezer's post: something we recognize as "free will" appears whenever we undergo introspection, for example. Or look at legal cases: acts are prosecuted entirely differently if they are not done of one's "free will", contracts are annulled if the signatories did not sign of their own "free will". We praise good deeds and deplore evil deeds that are done of one's own "free will". Annihilation of free will requires rebuilding all of these again from their very foundations - why do so, then, when one may be confident that a reasonable reading of the term exists?
Let's assume I can make a simulated world with lots of carefully scripted NPC's and with a script for the Main Character (full of interesting adventures like saving the galaxy), which somehow is forced upon a conscious being by means of some "exoself". Then I erase my memory and cease to be my old self, becoming this MC. Each of my actions is enforced by the exoself, I cannot do a single thing that isn't in the script. But of course I'm unaware of that (there are no extremely suspiciously unexplainable actions in the script) and still have all of...
You essentially posit a "decision algorithm" to which you ascribe the sensations most people attribute to free will. I don't think this is helpful and it seems like a cop-out to me. What if the way the brain makes decisions doesn't translate well onto the philosophical apparatus of possibility and choice? You're just trading "suggestively named LISP tokens" for suggestively named algorithms. But even if the brain does do something we could gloss in technical language as "making choices among possibilities" there still aren't r...
Usually I don't talk about "free will" at all, of course! That would be asking for trouble - no, begging for trouble - since the other person doesn't know about my redefinition.
Boy, have we ever seen that illustrated in the comments on your last two posts; just replace "know" with "care". I think people have been reading their own interpretations into yours, which is a shame: your explanation as the experience of a decision algorithm is more coherent and illuminating than my previous articulation of the feeling of free will ...
There are many terms and concepts that don't pay for themselves, though we might not agree on which ones. For example, I think Goedel's Theorem is one of them... its cuteness and abstract splendor don't offset the dumbness it invokes in people trying to apply it. "Consciousness" and "Free Will" are two more.
If the point here is to remove future objections to the idea that AI programs can make choices and still be deterministic, I guess that's fair but maybe a bit pedantic.
Personally I provisionally accept the basic deterministic red...
Kip Werking:
A. On your definition, free will is something that people uncontroversially have. Nobody ever doubted that people have the sort of local control you discuss. Nobody ever doubted that people are more like computers than rocks. So, compatibilist definitions of free will are boring, and odd, to me for at least that reason.
I say, "Hooray, I made it add up to normality!" Philosophy should be as normal as possible, but no more normal than that.
Yes, I probably was referring to your paper.
Thus, if you're willing to say that God, souls, etc., do not exist, but draw the line and say, "wait a minute, I'm willing to deny the existence of all of these other absurdities, but I'm not going to give you free will [maybe adding: that cuts too close]; I'm even willing to redefine the term, as Dennett does, before admitting defeat," then you fit Tamler Sommers's wonderful observation that "[p]hilosophers who reject God, Cartesian dualism, souls, noumenal selves, and even objective morality, cannot bring themselves to do the same for the concepts of free will and moral responsibility." There seems to be some tension here.
The primary thing I want to save i...
"If I understand you correctly on calling the feeling of deliberation an epiphenomenon, do you agree that those who report deliberating on a straightforward problem (say, a chess problem) tend to make better decisions than those who report not deliberating on it? Then it seems that some actual decision algorithm is operating, analogously to the one the person claims to experience."
"However, what I think Eliezer might reply to this is that there still is a process of deliberation going on; the ultimate decision does tend to achieve our goals ...
"The primary thing I want to save is the sensation of freedom"
"So you can see why I might want to rescue even "free will" and not just the sensation of freedom; what people fear, when they fear they do not have free will, is not the awful truth."
Eliezer, I think your desire to preserve the concept of "freedom" is conflicting [or at the least has the potential to conflict] with your desire to provide the best models of reality.
"Fear of being manipulated by an alien is common-sensically in a whole different class ...
HA:
Those are interesting empirical questions. Why jump to the conclusion?
I didn't claim it was a proof that some sort of algorithm was running; but given the overall increased effectiveness at maximizing utility that seems to come with the experience of deliberation, I'd say it's a very strongly supported hypothesis. (And to abuse a mathematical principle, the Church-Turing Thesis lends credence to the hypothesis: you can't consistently compete with a good algorithm unless you're somehow running a good algorithm.)
Do you have a specific hypothesis you thin...
Eliezer,
"I am not attached to the phrase "free will", though I do take a certain amount of pride in knowing exactly which confusion it refers to, and even having saved the words as still meaning something. Most of the philosophical literature surrounding it - with certain exceptions such as your own work! - fails to drive at either psychology or reduction, and can be discarded without loss."
Your modesty is breathtaking!
"Fear of being manipulated by an alien is common-sensically in a whole different class from fear of being determin...
Kip, the problem word is simply before. Your destiny is fixed, but it is not fixed before you were born. If you look at it timelessly, the whole thing just exists; if you do look at it timefully, then of course the future comes after the present, not before it, and is caused by your decisions.
Eliezer,
The subtle ambiguity here is between two meanings of "is fixed":
I think you are interpreting me to mean 1. I only meant 2, and that's all that I need. That the future is fixed2, before I am born, is what disturbs people, regardless of when the moment of fixing1 happens (if any).
KTW
Yudkowsky: My problem with this view is I don't know when other people/entities are "making choices". Is my computer making a choice to follow my suggestion on which operating system to boot up each time it is switched on (sometimes it gets impatient and makes up its own mind!)? Is it a moral decision maker? If not, at what sophistication does it become so?
Lots of this has been covered by Dennett, although I am not quite sure how much of his philosophy you take on. E.g. are you a Realist (capital R) for beliefs?
Your destiny is fixed, but it is not fixed before you were born.
No. [<-- Useless single-line flat contradiction that can be deleted without affecting any of the actual arguments. EY.] You like to claim that "determined does not mean predetermined", but of course that's precisely what it means. If the state at time 10 is determined by time 9, and time 9 by time 8, etc., then it follows that the state at time 10 is completely derived from the state at time 1. The only way events can be not predetermined is if they're not determined by the...
The laws of physics are symmetrical, and if the future can be known perfectly given the past, so too, the past can be perfectly known given the future.
If you're going to ignore the causal structure and step outside time and look over the Block Universe, then you might as well say that the past was already determined 50 years later.
You might as well say that you can't possibly choose to run into the burning orphanage, because your decision was fully determined by the future fact that the child was saved.
If you are going to talk about time, the future comes after the present.
If you are going to talk about causal structure, the present screens off the past.
If you are going to throw that away and talk about determinism, then you're talking about a timeless mathematical object and it makes no sense to use words like "before".
"The laws of physics are symmetrical, and if the future can be known perfectly given the past, so too, the past can be perfectly known given the future."
IF.
Which is the key to the matter. Whether the future is defined by the past is irrelevant - what matters is whether the nature of that determination can be known. And it can't.
So your 'if' clause fails, because it posits an impossible event. The future cannot be known.
And that's the key to understanding why we speak of choice. We can easily comprehend how one of our machines works, and we easily see t...
"You might as well say that you can't possibly choose to run into the burning orphanage, because your decision was fully determined by the future fact that the child was saved."
I don't see how that even begins to follow from what I've said, which is just that the future is fixed2 before I was born. The fixed2 future might be that I choose to save the child, and that I do so. That is all consistent with my claim; I'm not denying that anyone chooses anything.
"If you are going to talk about causal structure, the present screens off the past.&...
No. [<-- Useless single-line flat contradiction that can be deleted without affecting any of the actual arguments. EY.]
Caledonian,
"Wrong. [etc.]"
would probably have gotten the same edit, but I suspect
"On the contrary, [etc.]" "I disagree. [etc.]" "I see a flaw in your reasoning: [etc.]"
would have passed without comment. If Eliezer's edits are bothering you, you might consider trying these or other formulations for your thesis statement.
But human minds are too complex for us to do this - so we attribute its operations, which we cannot comprehend, to chance. Ignorance is the key.
I'd agree that ignorance is the key, but not that people in general attribute the operations of the human mind to chance. Rather, it seems we attribute these operations to a noumenal essence that is somehow neither deterministic nor random. We make this attribution so readily (and not just to other human minds) because that is our internal experience -- the feeling of our algorithm from the inside.
Unfortunately, even non-specialists have no difficulty tracing causal chains well into the past.
"Screens off" is a term of art in causal modeling. Technically, the present D-separates the past and future. This is visible in e.g. the counterfactual statement, "If the past changed but the present was held fixed, the future would not change; if the present changed but the past was held fixed, the future would change."
the future is fixed2 before I was born
It makes exactly as much sense to say, "The past is fixed fifty years after I am born."
"I'd agree that ignorance is the key, but not that people in general attribute the operations of the human mind to chance. Rather, it seems we attribute these operations to a noumenal essence that is somehow neither deterministic nor random. We make this attribution so readily (and not just to other human minds) because that is our internal experience -- the feeling of our algorithm from the inside."
Cyan, great comment. You should blog.
When I'm particularly torn on a choice, I flip a coin. But I don't always do what the coin says.
If my initial reaction to the result of the flip is to wish that the coin had come up the other way, then I go against it. If my reaction is relief, then I follow the coin. If I still don't care, then I realize that it really is too close to call, and either go with the coin, or pick some criteria to optimize for.
I don't know if this is telling me what I really want, tapping into unconscious decision making processes, or just forcing me to solidify my views in s...
I sense a strong bias here towards the belief that realism = using negative-affect terminology, especially by Hopefully and Kip. Hopefully also seems to keep trying to insert his very interesting point about confabulation in place of, instead of in addition to, Eliezer's points about determinism. The neuropsychology of illusory decision procedures however is disturbing to a different disposition than the existence of a future.
Kip, I think you've misinterpreted 'the present screens off the past'. Think of it this way: if you knew everything there was to know about one instant of a closed system, you'd be able to extrapolate forwards. Knowing about the instant 'before' would afford you no more knowledge. I think that's what Eliezer's trying to convey.
the idea of an alien/God/machine creating me five seconds ago, implanting within me a desire/value to pick up an apple, and then having the local control to act on that desire/value SCARES THE LIVING FU** OUT OF PEOPLE
It really shoul...
"'Fearing' determinism (or alien intervention) doesn't make any sense; it's like fearing causality."
People aren't motivated by facts, but by their models of facts. If there isn't a strong desire to produce accurate models, people will accept or reject models based on whether their implications are troubling to them. This is a fallacy related to the appeal to consequences - in this error, people conflate the rejection of a model with making the implications of that model untrue.
For example, refusing to reject the idea of an immortal soul because you don't...
I'm no longer overrepresented in the blog's most recent comments, so:
Michael, "the neuropsychology of illusory decision procedures" is relevant to lines from Eliezer like "There - now my sensation of freedom indicates something coherent; and most of the time, I will have no reason to doubt the sensation's veracity" and the paragraphs that followed it in this post, although I agree that it may not be particularly relevant to discussing "the existence of a future".
You can't change what a sensation indicates - it's triggered by its sufficient conditions and that's all. There is always reason to question sensations, because they are developed responses by the organism and thus have no logical connection with the states we hope they indicate.
Eliezer is just rationalizing his desire to stop having to think. He makes some statements about a concept, declares his innate emotional response on the subject to be valid, and ceases to inquire.
Eliezer, on your construal of free will, what content is added to "I chose to phi" by the qualification "of my own free will"?
Q, if you're already known to be human, the content added will usually be along the lines of "I chose to phi, and I wasn't threatened into it, and no one bribed me." :)
However, the part I'm interested in is, "I chose to phi, with an accompanying sensation of uncertainty and freedom."
This is the sensation that is brought into intuitive conflict with the notion of a lawful universe. Since part of becoming a rationalist is learning to be a native of a lawful universe, it's important to understand that the sensation of uncertainty and fre...
I see we're once again trying to delete comments. What fun!
"There - now my sensation of freedom indicates something coherent; and most of the time, I will have no reason to doubt the sensation's veracity"
You cannot change what a sensation is triggered by and thus indicates. The properties of the sensation have no necessary relationship with the stimuli that induce it.
We always have reason to doubt the sensations we experience, precisely because they are manufactured by pre-conscious systems whose operation we do not possess an awareness of. ...
Vassar: "The neuropsychology of illusory decision procedures however is disturbing to a different disposition than the existence of a future."
Yes. HA's point about neuroscience and the illusion of "I" is largely orthogonal to EY's discussion concerning choice and determinism. However, the neuroscience that HA references is common knowledge in EY's peer group and is relevant to the topic under discussion...so why doesn't EY respond to HA's point?
(Consider an experiment involving the "hollow face" illusion. The mind's eye sees a...
Fly, great comment. I think the most likely answer is that Eliezer isn't as literate in neuroscience/cognitive science as he is in Bayesian reasoning, physics, computer programming, and perhaps a few other fields - otherwise examples like the one in your post would find their way as naturally into his posts as examples from the fields he's proficient in. If that's true, the good news is it shouldn't take him any more effort to become literate in neuroscience than in those other fields, and when he is, we'll probably all benefit from his creative approaches to teaching key concepts and to application.
Fly, I highly recommend you start blogging (and critiquing my blog posts when you get a chance)!
Eliezer isn't as literate in neuroscience/cognitive science
I was using neuroscience as my fuel for thinking about AI before I knew what Bayes's Rule was.
The reason I'm not responding in this thread is that things like anosognosia, split-brain experiments, fMRI etc. are orthogonal issues to the classical debate on free will, and if I ever handle it, I'll handle it in a separate post.
For now I'll simply note that if an fMRI can read your decision 10 seconds before you know it, it doesn't mean that "your brain is deciding, not you!" It means that your decision has a cause and that introspection isn't instantaneous. Welcome to a lawful universe.
Cases of apperceptive agnosia, and to a lesser extent brains split at a mature stage of development, provide examples of how apperception, and the apperceptive "I" is in fact relevant to performing typical cognitive functions. I try to be careful not to make sweeping blanket statements about features of experience with a variety of uses or subtle aspects (e.g. "self = illusion"; "perception = illusion"; "judgment = illusion"; "thought = illusion"; "existence = illusion"; "illusions = ???"...
Ben, Great comment. Requests:
Fly/Ben/Eliezer/All,
If you were to have your brain ported to another substrate, would you demand that the neurons that recognise that 'illusory surface' be ported before you could accept that the upload is 'you'? Or would you say that's an unproductive way of looking at it?
The reason I'm not responding in this thread is that things like anosognosia, split-brain experiments, fMRI etc. are orthogonal issues to the classical debate on free will, and if I ever handle it, I'll handle it in a separate post.
Actually, the classical debate on that topic seems to be founded on our perception of ourselves as a unified being - when confronted with actions for which we cannot provide a causal explanation, we each say "I chose to do that" - and yet we have good reason to doubt that the systems responsible for making that state...
Caledonian: "Our perceptions, and most especially our mental self-perceptions, are not veridical. Once we acknowledge that we do not need to [do stuff]"
Do you think "our perceptions, and most especially our mental self-perceptions" are completely valueless? If not, where do you draw the line between valid and invalid?
Do you think "our perceptions, and most especially our mental self-perceptions" are completely valueless?Perceptions in general aren't completely useless. Mental perceptions - yes, totally worthless. Introspection tells us nothing of value, because our minds never had a need to accurately represent themselves to any degree and so are not designed to be able to do so. Result: garbage data. Even sensory perception is questionable. Despite its concerning the external, objective world, which clearly produces strong selection pressure for effe...
Caledonian, the trouble with denying any validity at all to introspective perception is that it would imply that consciousness plays no role in valid cognition. And yet consider the elaborate degree of self-consciousness implied by the construction of the epistemology you just articulated! Are you really going to say you derived all that purely from sense perception and unconscious cognition, with no input from conscious reflection?
Caledonian,
Philosophy has developed quite a bit since the Greeks started the Western tradition, and I wasn't invoking Greek traditions, but I don't recall the ancient skeptics getting very far.
The saying "Scientists need philosophy of science [and epistemology] like birds need ornithology" is true in a practical sense but dismissing the whole topic as irrelevant is unwarranted. Ignoring epistemological issues may be pragmatic depending on one's career but lack of attention doesn't resolve epistemological issues.
Through reason we can use our senses ...
Ignoring epistemological issues may be pragmatic depending on one's career but lack of attention doesn't resolve epistemological issues.
Rejecting the concept as incoherent, however, resolves those issues quite nicely. How can you generate knowledge about knowledge without having a definition for the subject matter and a presumed method of generation and evaluation already? You can't consider the questions without taking their answers for granted.
HA, Ben Jones
I appreciate the compliment and your interest in my views, however, for now, I would rather read what others have to say on this topic.
Caledonian,
As I said, you can accomplish quite a lot without delving far into the subject, but writing it off may leave you with a less-than-optimal framing of reality, one that just might leave you vulnerable to reaching inaccurate conclusions about important topics - like whether to state "all perception is illusion" instead of qualifying the claim, before an eccentric who buys it draws conclusions from that premise which make him or her less inclined to try to model reality accurately or to act in ways that presume a lawful external world.
Of course we bri...
"perhaps eventually finding no compelling reason not to dissolve increasingly artificial barriers between individual identities."
No thanks, Ben. I've got to wonder, why isn't it enough just to solve aging and minimize existential risk? If I were the administrator of a Turing test to see if you were a subjective conscious entity like me, this is the point where you'd fail.
Caledonian, the science of physiology and evolution may have played a large role in the creation of your epistemology, but I don't doubt that you also personally thought about the issues, paid attention to your own thinking to see if you were making mistakes, and so forth. Anyway, there's no need to play the reflexive game of "you would have used introspection on your way to the conclusion that introspection can't be used", in order to combat the notion that introspection is completely unreliable. If it were completely unreliable you would never ...
If it were completely unreliable you would never be accurate even when reporting your own opinions, except perhaps by chance.
I didn't say it was completely unreliable. I said it was completely useless.
As the saying goes, a stopped clock is right twice a day, while a working clock is unlikely to be accurate within the limits of measurement. However, the stopped clock is extraordinarily reliable - reliably useless, because it really tells us nothing at all about what time it is. The working clock may not either, but it could at least potentially be a useful guide.
I don't see personal identity v. non-identity as a binary distinction but a fuzzy one.
Agreed, and a view I've espoused here in the past. My question was actually intended to demonstrate this.
We have to ask ourselves what our strategy is for getting around our horribly skewed lenses onto the world, and onto the mind. I just think that saying 'everything we can think is almost certainly wrong' is a bad start. Where do you go from there? What do you compare your pre-conscious sensory perceptual data to, to make sure it's correct?
I don't want to have to train myself not to think, and only to measure. That would take all the fun out of it.
"No, you have to be the ultimate source of your decisions. If anything else in your past, such as the initial condition of your brain, fully determined your decision, then clearly you did not."
Once again, a straw man. Free will might not exist, but it won't be disproved by this reasoning. People who claim free will don't claim 100% free will (e.g., the ability to will your own birth). Free will proponents generally believe the basis for free will is choosing from among two or more symbolic brain representations. If the person read a book about the...
Phillip Huggan, the Wikipedia article on tachyons answers your question. Extremely short version (granting tachyons exist): for a tachyon, there is no distinction between the processes of emission and absorption. The attempt to detect a tachyon from the future (and violate causality) would actually create the same tachyon and send it forward in time.
Caledonian: "I didn't say it was completely unreliable. I said it was completely useless."
I'm surprised you didn't take my second option and moderate your position. Whether you are insisting that introspection is only ever accurate by coincidence, or just that whatever accuracy it possesses is of no practical utility, neither position bears much relationship to reality. The introspective modalities, however it is that they are best characterized, have the same quality - partial reliability - that you attributed to the external senses, and everyon...
Fly,
You're right that if a portion of the brain or CNS had "awareness" or even reflective "consciousness" then the united apperceptive "subject of experience(/thought/action)" might be completely unaware of it. I think the connectionist philosophers Gerard O'Brien and Jon Opie have mentioned that possibility, though I don't think they suggested there was any reason to believe that to be the case. They have written some interesting papers speculating on the evolutionary development of awareness and consciousness. (Btw, Kant ac...
Whether you are insisting that introspection is only ever accurate by coincidence, or just that whatever accuracy it possesses is of no practical utility, neither position bears much relationship to reality.
[You are factually incorrect.] We've tried developing models of human psychology by relying on introspection - it was in fact the first approach taken in the modern age. Most researchers abandoned it soon after it became clear that our experience of our cognition did not in fact permit useful models to be generated. We've had more than a hundred an...
Clearly it's a waste of time to try to have a reasoned debate with someone not even willing to consider one's arguments but rather intent on misrepresenting them as directed toward purposes for which they never were intended to serve (e.g. a fleshed-out psychology or comprehensive analysis of the perceptual system).
It's a shame you haven't read Hume's skeptical critiques of empirical claims of "fact," but as I said before, deep epistemology isn't of interest to everyone and isn't relevant to the vast majority of scientific claims that can be made.
Peace.
Interesting old mindhacks article touches on some of these themes (how we arrive at certainties/decisions):
http://www.mindhacks.com/blog/2008/05/five_minutes_with_ro.html
This sensation of freedom occurs when I believe that I can carry out, without interference, each of multiple actions, such that I do not yet know which of them I will take, but I am in the process of judging their consequences according to my emotions and morals.
This is a very good definition of free will, and it is way more sensible than claiming to be the "only and ultimate source" of one's own actions, but there is a notion in the Greek and Judaic traditions of being able to rise above one's fate that isn't quite captured by it.
To put this ...
Imagine a ball rolling down a pipe. At one point the pipe forks, and at that point there is a simple mechanical device that sorts the balls according to size: all balls larger than 4 cm in diameter go left, all smaller ones go right. Let this be the definition of a "choice" (with the device as the agent) for the following argument, and let "you" define a certain arrangement of atoms in Eliezer's block-universe-with-glue. Then "you" will be what "decides" every time you make a choice; trivially so, given those definitio...
Also, knowing that the book I'm reading is of a deterministic nature doesn't make me any less interested in knowing how it ends.
Certainly I do not "lack free will" if that means I am in jail, or never uncertain of my future decisions, or in a brain-state where my emotions and morals fail to determine my actions in the usual way.
A question I would like advice from others on:
I frequently find myself in a brain-state where my emotions and morals fail to determine my actions in what most people call the 'usual way'. Essentially, at certain times I am "along for the ride", and have no capacity to influence my behavior until the ride has come to a full and complete...
So I was kind of disappointed when I read about "the solution to free will" in this article, since I already seemed to have figured out the answer! This I did during the first minute I tried to come up with something, as urged to in Dissolving the Question.
What I came up with was this: What I perceive as "me" is every component that is cooperating inside me to create this thought process. My choices are the products of everything that has ever happened to me. I am the result of what has happened before these self-reflections of mine t...
So in this interpretation of the word "free will", even AI would have the same free will humans have?
Am I correct in thinking that I am not the computing machine but the computation itself? If it were possible to predict my behaviour, would they have to simulate an approximation of me within themselves or within the computer?
I am interested in what implications this has for how hard or easy it is to manipulate other humans. Increasingly, with companies gaining access to a lot of data and computing power, can they start to manipulate people at ve...
This post is part of the Solution to "Free Will".
Followup to: Timeless Control, Possibility and Could-ness
Faced with a burning orphanage, you ponder your next action for long agonizing moments, uncertain of what you will do. Finally, the thought of a burning child overcomes your fear of fire, and you run into the building and haul out a toddler.
There's a strain of philosophy which says that this scenario is not sufficient for what they call "free will". It's not enough for your thoughts, your agonizing, your fear and your empathy, to finally give rise to a judgment. It's not enough to be the source of your decisions.
No, you have to be the ultimate source of your decisions. If anything else in your past, such as the initial condition of your brain, fully determined your decision, then clearly you did not.
But we already drew this diagram:
As previously discussed, the left-hand structure is preferred, even given deterministic physics, because it is more local; and because it is not possible to compute the Future without computing the Present as an intermediate.
So it is proper to say, "If-counterfactual the past changed and the present remained the same, the future would remain the same," but not to say, "If the past remained the same and the present changed, the future would remain the same."
Are you the true source of your decision to run into the burning orphanage? What if your parents once told you that it was right for people to help one another? What if it were the case that, if your parents hadn't told you so, you wouldn't have run into the burning orphanage? Doesn't that mean that your parents made the decision for you to run into the burning orphanage, rather than you?
On several grounds, no:
If it were counterfactually the case that your parents hadn't raised you to be good, then it would counterfactually be the case that a different person would stand in front of the burning orphanage. It would be a different person who arrived at a different decision. And how can you be anyone other than yourself? Your parents may have helped pluck you out of Platonic person-space to stand in front of the orphanage, but is that the same as controlling the decision of your point in Platonic person-space?
Or: If we imagine that your parents had raised you differently, and yet somehow, exactly the same brain had ended up standing in front of the orphanage, then the same action would have resulted. Your present self and brain screen off the influence of your parents - this is true even if the past fully determines the future.
But above all: There is no single true cause of an event. Causality proceeds in directed acyclic networks. I see no good way, within the modern understanding of causality, to translate the idea that an event must have a single cause. Every asteroid large enough to reach Earth's surface could have prevented the assassination of John F. Kennedy, if it had been in the right place to strike Lee Harvey Oswald. There can be any number of prior events, which if they had counterfactually occurred differently, would have changed the present. After spending even a small amount of time working with the directed acyclic graphs of causality, the idea that a decision can only have a single true source, sounds just plain odd.
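(A rough sketch of that point - purely my own illustration, with made-up influence names and a made-up combination rule: an outcome node with several parents in a causal graph, where intervening on any single parent can flip the outcome, so no parent is "the" cause.)

```python
# Toy causal sketch: any two of the three influences suffice for the action.

def runs_into_orphanage(raised_to_be_good, read_hero_comics, evolved_empathy):
    return sum([raised_to_be_good, read_hero_comics, evolved_empathy]) >= 2

print(runs_into_orphanage(True, True, False))    # True in the "actual" world

# Counterfactual interventions on single parent nodes:
print(runs_into_orphanage(False, True, False))   # False - upbringing made the difference
print(runs_into_orphanage(True, False, False))   # False - the comics made the difference too
# Several distinct parents are each counterfactually decisive; none is the single true cause.
```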
So there is no contradiction between "My decision caused me to run into the burning orphanage", "My upbringing caused me to run into the burning orphanage", "Natural selection built me in such fashion that I ran into the burning orphanage", and so on. Events have long causal histories, not single true causes.
Knowing the intuitions behind "free will", we can construct other intuition pumps. The feeling of freedom comes from the combination of not knowing which decision you'll make, and of having the options labeled as primitively reachable in your planning algorithm. So if we wanted to pump someone's intuition against the argument "Reading superhero comics as a child, is the true source of your decision to rescue those toddlers", we reply:
"But even if you visualize Batman running into the burning building, you might not immediately know which choice you'll make (standard source of feeling free); and you could still take either action if you wanted to (note correctly phrased counterfactual and appeal to primitive reachability). The comic-book authors didn't visualize this exact scenario or its exact consequences; they didn't agonize about it (they didn't run the decision algorithm you're running). So the comic-book authors did not make this decision for you. Though they may have contributed to it being you who stands before the burning orphanage and chooses, rather than someone else."
How could anyone possibly believe that they are the ultimate and only source of their actions? Do they think they have no past?
If we, for a moment, forget that we know all this that we know, we can see what a believer in "ultimate free will" might say to the comic-book argument: "Yes, I read comic books as a kid, but the comic books didn't reach into my brain and force me to run into the orphanage. Other people read comic books and don't become more heroic. I chose it."
Let's say that you're confronting some complicated moral dilemma that, unlike a burning orphanage, gives you some time to agonize - say, thirty minutes; that ought to be enough time.
You might find, looking over each factor one by one, that none of them seem perfectly decisive - to force a decision entirely on their own.
You might incorrectly conclude that if no one factor is decisive, all of them together can't be decisive, and that there's some extra perfectly decisive thing that is your free will.
Looking back on your decision to run into a burning orphanage, you might reason, "But I could have stayed out of that orphanage, if I'd needed to run into the building next door in order to prevent a nuclear war. Clearly, burning orphanages don't compel me to enter them. Therefore, I must have made an extra choice to allow my empathy with children to govern my actions. My nature does not command me, unless I choose to let it do so."
Well, yes, your empathy with children could have been overridden by your desire to prevent nuclear war, if (counterfactual) that had been at stake.
This is actually a hand-vs.-fingers confusion; all of the factors in your decision, plus the dynamics governing their combination, are your will. But if you don't realize this, then it will seem like no individual part of yourself has "control" of you, from which you will incorrectly conclude that there is something beyond their sum that is the ultimate source of control.
But this is like reasoning that if no single neuron in your brain could control your choice in spite of every other neuron, then all your neurons together must not control your choice either.
Whenever you reflect, and focus your whole attention down upon a single part of yourself, it will seem that the part does not make your decision, that it is not you, because the you-that-sees could choose to override it (it is a primitively reachable option). But when all of the parts of yourself that you see, and all the parts that you do not see, are added up together, they are you; they are even that which reflects upon itself.
So now we have the intuitions that:
The combination of these intuitions has led philosophy into strange veins indeed.
I once saw one such vein described neatly in terms of "Author" control and "Author*" control, though I can't seem to find or look up the paper.
Consider the control that an Author has over the characters in their books. Say, the sort of control that I have over Brennan.
By an act of will, I can make Brennan decide to step off a cliff. I can also, by an act of will, control Brennan's inner nature; I can make him more or less heroic, empathic, kindly, wise, angry, or sorrowful. I can even make Brennan stupider, or smarter up to the limits of my own intelligence. I am entirely responsible for Brennan's past, both the good parts and the bad parts; I decided everything that would happen to him, over the course of his whole life.
So you might think that having Author-like control over ourselves - which we obviously don't - would at least be sufficient for free will.
But wait! Why did I decide that Brennan would decide to join the Bayesian Conspiracy? Well, it is in character for Brennan to do so, at that stage of his life. But if this had not been true of Brennan, I would have chosen a different character that would join the Bayesian Conspiracy, because I wanted to write about the beisutsukai. Could I have chosen not to want to write about the Bayesian Conspiracy?
To have Author* self-control is not only to have control over your entire existence and past, but to have initially written your entire existence and past, without having been previously influenced by it - the way that I invented Brennan's life without having previously lived it. To choose yourself into existence this way would be Author* control. (If I remember the paper correctly.)
Paradoxical? Yes, of course. The point of the paper was that Author* control is what would be required to be the "ultimate source of your own actions", the way some philosophers seemed to define it.
I don't see how you could manage Author* self-control even with a time machine.
I could write a story in which Jane went back in time and created herself from raw atoms using her knowledge of Artificial Intelligence, and then Jane oversaw and orchestrated her own entire childhood up to the point she went back in time. Within the story, Jane would have control over her existence and past - but not without having been "previously" influenced by them. And I, as an outside author, would have chosen which Jane went back in time and recreated herself. If I needed Jane to be a bartender, she would be one.
Even in the unlikely event that, in real life, it is possible to create closed timelike curves, and we find that a self-recreating Jane emerges from the time machine without benefit of human intervention, that Jane still would not have Author* control. She would not have written her own life without having been "previously" influenced by it. She might preserve her personality; but would she have originally created it? And you could stand outside time and look at the cycle, and ask, "Why is this cycle here?" The answer to that would presumably lie within the laws of physics, rather than Jane having written the laws of physics to create herself.
And you run into exactly the same trouble, if you try to have yourself be the sole ultimate Author* source of even a single particular decision made by you - which is to say it was decided by your beliefs, inculcated morals, evolved emotions, etc. - which is to say your brain calculated it - which is to say physics determined it. You can't have Author* control over one single decision, even with a time machine.
So a philosopher would say: Either we don't have free will, or free will doesn't require being the sole ultimate Author* source of your own decisions, QED.
I have a somewhat different perspective, and say: Your sensation of freely choosing clearly does not provide you with trustworthy information to the effect that you are the 'ultimate and only source' of your own actions. This being the case, why attempt to interpret the sensation as having such a meaning, and then say that the sensation is false?
Surely, if we want to know which meaning to attach to a confusing sensation, we should ask why the sensation is there, and under what conditions it is present or absent.
Then I could say something like: "This sensation of freedom occurs when I believe that I can carry out, without interference, each of multiple actions, such that I do not yet know which of them I will take, but I am in the process of judging their consequences according to my emotions and morals."
This is a condition that can fail in the presence of jail cells, or a decision so overwhelmingly forced that I never perceived any uncertainty about it.
There - now my sensation of freedom indicates something coherent; and most of the time, I will have no reason to doubt the sensation's veracity. I have no problems about saying that I have "free will" appropriately defined; so long as I am out of jail, uncertain of my own future decision, and living in a lawful universe that gave me emotions and morals whose interaction determines my choices.
Certainly I do not "lack free will" if that means I am in jail, or never uncertain of my future decisions, or in a brain-state where my emotions and morals fail to determine my actions in the usual way.
Usually I don't talk about "free will" at all, of course! That would be asking for trouble - no, begging for trouble - since the other person doesn't know about my redefinition. The phrase means far too many things to far too many people, and you could make a good case for tossing it out the window.
But I generally prefer to reinterpret my sensations sensibly, as opposed to refuting a confused interpretation and then calling the sensation "false".