
Why IQ shouldn't be considered an external factor

2 estimator 04 April 2015 05:58PM

This is a sort-of response to this post.

"Things under your control" (more generally, free will) is an ill-defined concept: you are an entity within physics; all of your actions and thoughts are fully determined by physical processes in your brain. Here, I will assume that "things under your control" are any things that are controlled by your brain, since it is a consistent definition, and it's what people usually mean when they talk about things under one's control.

So, you may be interested in the question: how much does one's success depend on one's thoughts and actions (i.e. things that are controlled by one's brain) vs. on the circumstances/environment (i.e. things that aren't)? Another formulation: how much could you change someone's life outcomes if you could alter the neural signals emitted by their brain?

We could also draw the boundary somewhere else; maybe add physical traits, like height or attractiveness, to the "internal factors" category, or maybe assign some brain parts to the "external factors" category. The question of whether your life success is mostly determined by "internal factors" or "external factors" would remain valid -- call it the "internal vs. external locus of control" question.

But what happens when we assign IQ to the "external factors" category?

An IQ test is an attempt to measure some value that is supposed to capture something like the quality of one's thinking process. So, this value can be seen as a function IQ(brain), which maps brains to numbers. Your thoughts and actions don't depend on your IQ score; your IQ score depends on your thoughts. That's how the causal arrows are arranged.

But it's possible to ask what we could change if we could change the brain while holding the IQ score fixed. At that point, though, the "free will" intuition collapses; it's hard to imagine what we could change if our thought processes were restricted in some weird way. And such a question is hardly practical, in my opinion. It's true that one can measure one's IQ, and that IQ rarely changes much, but still: if you consider IQ a fixed, external factor outside your control, then you must consider your thought processes restricted to some set and, therefore, not totally under your control.
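The claim that IQ is a function of the brain, and that holding its value fixed restricts your possible thought processes to a subset, can be made concrete with a toy model. Everything here -- the three traits, the formula, the discrete "brain-space" -- is invented purely for illustration, not a claim about how IQ actually works:

```python
from itertools import product

def iq(brain):
    """A toy IQ(brain): a deterministic function mapping brain states to numbers.

    The traits and weights below are made up for this sketch.
    """
    speed, memory, focus = brain
    return 80 + 10 * speed + 5 * memory + 5 * focus

# A tiny discrete "brain-space": every brain is a tuple of three traits,
# each taking values 0, 1, or 2.
brains = list(product(range(3), repeat=3))

# Conditioning on a fixed IQ score restricts you to a level set of the
# function: only the brain states that map to that score remain possible.
level_set = [b for b in brains if iq(b) == 100]

print(len(brains))      # all brain states
print(len(level_set))   # the far smaller set left once IQ is held fixed
```

The point of the sketch is just the shape of the argument: the causal arrow runs from brain state to score, so "fixing the score" is not an extra input to the brain but a constraint that carves out a strange subset of possible brains -- the "weird manifold of thought-space" described below.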

Define "things under your control" as "things under the control of your brain's neural signals", and we have a consistent definition and find ourselves in the common-sense domain. Declare that everything is under the control of physics, and we again have a consistent definition of "things under your control" (the empty set), and now we are in the physics domain. Both cases are quite intuitive.

But when we consider IQ external, "things under your control" are your thoughts -- but not quite: we can control our thoughts, but only as long as they reside on some weird manifold of thought-space. I suspect that in that case your "free will" intuitions would be disrupted. Basically, we can't slice some part of what we call "personality" out and still keep our intuitions about personality and free will intact.

TL;DR: You shouldn't consider any function of your current brain state as external when discussing locus of control, since such a viewpoint is counterintuitive and, therefore, makes you prone to errors.

Dennett on the selfish neuron, etc.

8 NancyLebovitz 17 September 2013 05:09PM

Dennett:

Mike Merzenich sutured a monkey's fingers together so that it didn't need as much cortex to represent two separate individual digits, and pretty soon the cortical regions that were representing those two digits shrank, making that part of the cortex available to use for other things. When the sutures were removed, the cortical regions soon resumed pretty much their earlier dimensions. If you blindfold yourself for eight weeks, as Alvaro Pascual-Leone does in his experiments, you find that your visual cortex starts getting adapted for Braille, for haptic perception, for touch.

The way the brain spontaneously reorganizes itself in response to trauma of this sort, or just novel experience, is itself one of the most amazing features of the brain, and if you don't have an architecture that can explain how that could happen and why that is, your model has a major defect. I think you really have to think in terms of individual neurons as micro-agents, and ask what's in it for them?

Why should these neurons be so eager to pitch in and do this other work just because they don't have a job? Well, they're out of work. They're unemployed, and if you're unemployed, you're not getting your neuromodulators. If you're not getting your neuromodulators, your neuromodulator receptors are going to start disappearing, and pretty soon you're going to be really out of work, and then you're going to die.

I hadn't thought about any of this -- I thought the hard problem of brains was that dendrites grow so that neurons aren't arranged in a static map. Apparently that is just one of the hard problems.

He also discusses how much of culture is parasitic; that philosophy has something valuable to offer about free will (I don't know what he has in mind there); the hard question of how people choose whom to trust and why they're so bad at it (he thinks people choose their investment advisers more carefully than they choose their pastors -- I suspect he's over-optimistic); and a detailed look at Preachers Who Are Not Believers. That last looks intriguing -- part of the situation is that preachers have been taught it's very bad to shake someone else's faith, so there's an added layer of inhibition which keeps preachers doing their usual job even after they're no longer believers themselves.

The scope of "free will" within biology?

15 Jay_Schweikert 29 June 2011 06:34AM

I've recently read through Eliezer's sequence on "free will", and I generally found it to be a fairly satisfying resolution/dissolution of the many misunderstandings involved in standard debates about the subject. There's no conflict between saying "your past circumstances determined that you would rush into the burning orphanage" and "you decided to rush into the burning orphanage"; what really matters is the experience of weighing possible options against your emotions and morals, without knowledge of what you will decide, rather than some hypothetical freedom to have done something different, etc. Basically, the experience of deciding between alternatives is real, don't worry too much about nonsense philosophical "free will" debates, just move on and live your life. Fine.

But I'm trying to figure out the best way to conceptualize the idea that certain biological conditions can "inhibit" your "free will," even under a reductionist understanding of the concept. Consider this recent article in The Atlantic called "The Brain on Trial." The basic argument is that we have much less control over ourselves than we think, that biology and upbringing have tremendous influences on our decisions, and that the criminal justice system needs to account for the pervasiveness of biological influence on our actions. On the one hand, duh. The article treats the idea that we are "just" our biology as some kind of big revelation that has only recently been understood:

The crux of the problem is that it no longer makes sense to ask, “To what extent was it his biology, and to what extent was it him?,” because we now understand that there is no meaningful distinction between a person’s biology and his decision-making. They are inseparable.

Is that because we've just now discovered reductionism? If we weren't "just" our biology, what would we be? Magic? Whatever we mean by consciousness and decision-making, I'm sure LW members pretty much all accept that they occur within physics. The author doesn't even seem to fully grasp this point himself, because he states at the end that there "may" be at least some space for free will, independent of our biology, but that we just don't understand it yet:

Free will may exist (it may simply be beyond our current science), but one thing seems clear: if free will does exist, it has little room in which to operate. It can at best be a small factor riding on top of vast neural networks shaped by genes and environment.

Obviously most LW reductionists are going to immediately grasp that "free will" doesn't exist in addition to our neural networks. What would that even mean? It's not "90% neural networks, 10% free will" -- the point is that the process of your neural networks operating normally on a particular decision is what we mean by "free will," at least when we care to use that concept. (If anyone thinks I've stated this incorrectly, feel free to correct me.)

But still, notwithstanding that a lot of this article sort of seems to be missing the point (largely because the author doesn't quite get how obvious the central premise really is), I'm still wrestling with how to understand some of its more specific points, within the reductionist understanding of free will. For example, Charles Whitman, the shooter who killed 13 people from the UT Tower, had written out a suicide note noting that he had recently been the "victim of many unusual and irrational thoughts" and requesting that his brain be examined. An autopsy revealed that he had a large brain tumor that had damaged his amygdala, thus causing emotional and social disturbances. Similarly, in 2000, a man named "Alex" (fake name, but real case) suddenly developed pedophilic impulses at age 40, and was eventually convicted of child molestation. Turns out he also had a brain tumor, and once it was removed, his sexual interests went back to normal. The pedophilic impulses soon returned, and the doctors discovered the tumor had grown back -- they removed it for good, and his behavior went back to normal.

Obviously people like Charles and Alex aren't "victims of their biology" any more than the rest of us. Nobody's brain has some magic "free will" space that "exempts" the person from biology. But even under the reductionist conception of free will, it still seems like Charles and Alex are somehow "less free" than "normal" people. Even though everyone's decisions are, in some sense, determined by their past circumstances, there still seems to be a meaningful way in which Charles and Alex are less able to make decisions "for themselves" than those of us without brain tumors -- almost as if they had a tic which caused involuntary physical actions, but drawn out over time in patterns, rather than in single bursts. Or to put it differently, where the phrase "your past circumstances determine who you are when you face a choice, but you are still the one who decides" holds true for most people, it seems like it doesn't hold true for them. At the very least, it seems like we would certainly be justified in judging Charles and Alex differently from people who don't suffer from brain tumors.

But if we're already committed to the reductionist understanding of free will in the first place, what does this intuition that Charles and Alex are somehow "less free" really mean? Obviously we all have biological impulses that make us more or less inclined to make certain decisions, and that might therefore encroach on some ideal conception of "control" over ourselves. But are these impulses qualitatively different from biological conditions that "override" normal decision-making? Is a brain tumor pushing on your amygdala more akin to prison bars, which really do inhibit your freedom in a purely physical sense, or just a more intense version of genes that give you a slight disposition toward violent behavior?

My intuition is that somewhere along the line here I may be asking a "wrong question," or importing some remnant of a non-biological conception of free will into my thinking. But I can't quite pin the issue down in a way that resolves it satisfyingly, so I was hoping that some of you might be able to help me reason through this appropriately. Thoughts?

Dissolution of free will as a call to action

9 Dr_Manhattan 24 May 2011 12:41PM

Accepting determinism and the ensuing dissolution of free will is often feared as something that would lead to loss of will and fatalism. Gary Drescher and Eliezer spend considerable effort explaining why this is a fallacy.

One thing I don't remember being mentioned (though maybe I missed it) is the opposite effect: if you fail to accomplish something, the free-will explanation is likely to make you stop investigating the root cause, leaving it a mystery. Once you accept determinism, you know that a failure is determined by your mental algorithms, and you should be much more motivated to push the investigation further, making yourself stronger.