BenAlbahari comments on The Psychological Diversity of Mankind - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (153)
Voted up because I think AS is a great example of psychological diversity. I'm curious however as to the origin of your belief that AS people are more attracted to decompartmentalization than neurotypicals are.
I've picked up some anecdotal evidence for that over the past few months. Just a week ago I was talking with one guy with AS about some ethics problems; he brought up an example where you're with 20 other people, including a baby who won't stop crying, hiding from an approaching army. Under some simplified assumptions, if the baby keeps crying, the army will find and kill all of you, and if the baby stops, they probably won't. If killing the baby is the only way to stop it, is it moral to do so? The consequentialist answer seemed obvious to both of us, even when he specified that the army would spare the baby's life but kill the rest of you. He told me that this is a characteristically autistic way of thinking about moral problems, and he's had more contact with autistic/AS people than I have (aside from being one himself), so I'm inclined to believe him. (I'm not AS myself, but I'm apparently close enough that several people at several points in my life have suspected it, but not enough to be diagnosed with it.)
Edit: He wasn't sure about torture vs. dust specks, but that seemed to be more because he didn't see how a problem involving such impossibly huge numbers of people could have any useful implications about more realistic ethical scenarios. I disagreed — the math is the same, and I think pathological cases are useful for testing the integrity and consistency of ethical theories and for testing how seriously a person takes the theory/methodology they profess to follow — but he didn't find that particular point to be relevant.
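The point that "the math is the same" can be made concrete with a small sketch. All the utility figures below are invented for illustration (the thread gives no numbers); the only claim is structural: if a dust speck has any nonzero disutility, a large enough population makes the aggregate exceed any fixed harm.

```python
from fractions import Fraction

# Invented figures for illustration only.
speck = Fraction(1, 10**9)   # assumed tiny disutility of one dust speck
torture = Fraction(10**7)    # assumed disutility of 50 years of torture
people = 10**20              # stand-in for the "impossibly huge" number

total_speck_harm = speck * people  # aggregate harm across everyone specked
print(total_speck_harm > torture)  # True: the aggregate dominates
```

Using exact rationals (`Fraction`) avoids any floating-point quibble: for any positive per-speck harm, there is some population size past which the inequality flips, which is all the pathological case is meant to test.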
Are you sure "flummoxed" is the right word? I don't think "neurotypicals" are confused by the mathematics involved. They just dispute that the utilitarian math represents an accurate theory of ethics. Would you use the word "flummoxed" for a physicist who understands the mathematics of a theory but disputes that it says anything relevant about the real world, even if he has no alternative theory to offer?
For full disclosure, I am not convinced by utilitarian arguments at all, both in these problems you mention and in most other widely disputed ones. I understand them with perfect clarity; I just dispute that they have any relevance beyond the entertainment value of the logical exercise, and possibly propaganda value for some parties in some situations. I certainly wouldn't describe my situation as "flummoxed."
Eh, as I've argued before on LW, there are utilitarian, AS-compatible justifications for such a position: specifically, that your heroic act shuffles around the risk profiles of various activities in unpredictable ways, thus limiting the ability of people to manage risks, leading them to waste significant resources (perhaps exceeding the amount that would otherwise save more than a million lives) returning to their preferred risk profile.
The key part:
Note that this doesn't argue for a deontological prohibition, but rather, argues about the consequences of sudden deviations from social norms, without assumption of their categorical justness.
ETA: In terms of Timeless Decision Theory, you could put it this way: if people knew that bridge-walkers are drafted for deadly work on a moment's notice, it's much less likely you'd have a fat person handy to begin with. So, the way TDT calculates probabilities, the EU of pushing the fat guy off is very small on account of its low TDT-probability, eliminating the supposed utility gain.
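The TDT argument above can be sketched numerically. The numbers are invented for illustration: the structural point is that the expected utility of the "push" policy is weighted by the probability that the opportunity arises at all, and that probability shrinks once the policy is common knowledge.

```python
# Hedged sketch of the TDT point: if drafting bridge-walkers is the known
# policy, fewer people stand on bridges, so the opportunity rarely arises.
# All probabilities and life counts below are invented for illustration.

def expected_gain(p_fat_man_present, lives_saved=5, lives_lost=1):
    """Expected-utility gain of the 'push' policy, weighted by how
    likely the opportunity even arises under that policy."""
    return p_fat_man_present * (lives_saved - lives_lost)

# Under current norms, someone might well be on the bridge:
print(expected_gain(0.5))    # 2.0
# If everyone knows bridge-walkers get drafted, almost no one stands there:
print(expected_gain(0.001))  # 0.004
```

The supposed utility gain from adopting the policy is thus mostly eaten by the policy's own effect on the probability of the scenario.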
It isn't just about being fat while on a bridge over trolley tracks, of course. It might be a worse world if people generally believed they should take deadly action whenever they see a utilitarian win.
Much less likely? That would require that such drafting be more likely on bridges than elsewhere (how often do these train accidents happen?). Also, ex ante one is more likely to find oneself among the million saved than to be the one person sacrificed, so nearly everyone should agree to a policy that those in a position to offer incredible help be drafted.
The problem induced by pushing the fat guy off is that people don't know which zones now count as "sacrificial lamb" zones (because of the bizarreness of the deviation from social norms), except that bridges over densely-populated trolley tracks are one of them, so I think the resulting world meets this criterion.
But people are already choosing risk profiles that, under present social norms, cause them to die when near tracks that have an errant trolley coming, so it's not clear why they'd make tradeoffs (giving up other things they value) for greater near-trolley safety, and thus not clear why they'd prefer this at all.
In this case, the cost (borne by everyone in the area, not just people near tracks) is that they have to re-organize their lives around choosing routes that avoid sacrificial lamb zones. But -- by the scenario's stipulation -- people aren't currently choosing to bear the additional cost to be on the safer bridge rather than the dangerous track. (If they were, the scenario would involve millions crossing the bridge and few near the track.) What they are choosing is to bear the risk of death because of the convenience it affords.
And because the option of pushing someone off the bridge tells people, "Okay, you have to be a lot more risk-averse to get your current level of safety", they're forced to pay more for the same safety.
On the other hand, don't forget that talk is cheap, and actions speak louder than words. I doubt that many utilitarians would be willing to follow their conclusions in practice in situations such as the fat man/trolley problem. To stress that point even further, imagine if you had to cut the fat man's throat instead of just pushing him (and feel free to increase the cost of the alternative if you think this changes the equation significantly relative to pushing). I'd bet dollars to donuts that a large majority of the contemporary genteel utilitarians couldn't bring themselves to do it, no matter how clear the calculus that -- according to them -- mandates this course of action.
This suggests to me that this "dumbfoundedness" might be in fact a consequence of more clear and far-reaching insight, not confusion. Biting moral bullets is easy in armchair discussions; what you'd actually be able to bring yourself to do is another question altogether. Therefore, when I see people who coolly affirm the logical conclusions of their favored formal ethical theories even when they run afoul of common folks' intuition, I have to ask if they are really guided by logic to an exceptional degree in their lives -- or do they simply fail to see, out of sheer mental short-sightedness, how remote their armchair theorizing is from what they'd be willing and capable to do if they, God forbid, actually found themselves in some such situation.
(This is not the reason why I don't see any validity in utilitarianism; that would be a topic for another discussion altogether. The point here is that logical consistency in ethical armchair discussions could in fact be a consequence of myopia, not logical clear-sightedness.)
I don't think this statement is logically consistent. Unless you're restrained by some outside force, if you don't do something, that means you didn't want to do it. You might hypothesize that you would have wanted it within some counterfactual scenario, but given the actual circumstances, you didn't want it.
The only way out of this is if we dispense with the concept of humans as individual agents altogether, and analyze various modules, circuits, and states in each single human brain as distinct entities that might be struggling against each other. This might make sense, but it breaks down the models of pretty much all standard ethical theories, utilitarian and otherwise, which invariably treat humans as unified individuals.
But regardless of that, do you accept the possibility that at least in some cases, bullet-biting on moral questions might be the consequence of a failure of imagination, not exceptional logical insight?
Interesting. This implies that there are actually two ways of interpreting such moral dilemmas: either as A) "what would you actually do in this situation", or B) "what would be the right thing to do in this situation, regardless of whether you'd actually be capable of doing it".
I've always interpreted the questions as being of type B, but the way you write suggests you're thinking of them as being type A. I wonder how much of the disagreement relating to these questions is caused by differing interpretations.
It's more complicated than that. Most people would say that there are imaginable situations where a certain course of action is right, but they'd be strongly tempted to act differently out of base motives. For example, if you ask a typical person whether it would be right to gain a large amount of money by some sort of cheating, assuming you know for sure there won't be any negative consequences, they'll immediately understand that the question is about what's normatively right, not how they'd be tempted to act. Some very sincere people would probably admit that they might yield to the temptation, even though they consider it wrong.
Now, imagine you're introduced to someone who had the opportunity to cheat a business partner for a million dollars with zero risk of repercussions, but flat-out refused to do so out of sheer moral fiber. You'll immediately perceive this person as trustworthy and desirable to deal with -- a man who acts according to high principles, not base passion and instinct. In contrast, you'd shun and despise him if you heard he'd acted otherwise.
However, let's now compare that with the extreme fat man problem (where you'd have to cut the fat man's throat to avert some greater loss of life). Imagine you're introduced to someone who was faced with it and who slit the fat man's throat without blinking. Would you feel warm and fuzzy about this person? Would any of the bullet-biting utilitarians fail to be profoundly creeped out just by the knowledge that they are standing next to someone who actually acted like that -- even though they'd all defend (nay, prescribe!) his course of action relentlessly when philosophizing? Moreover, I would again bet dollars to donuts that our genteel utilitarians would be much less creeped out by someone who couldn't bring himself to butcher the fat man.
When I think about this, I honestly can't but detect severe short-sightedness in moral bullet-biters.
I can believe that a neurotypical person would be more likely to imagine themselves doing the actual killing, while someone on the AS would be more likely to stay with the abstract problem.
I was going to dispute your use of "flummoxed" as well but then I realized my position on normative ethics is basically an extended defense of moral dumbfoundedness and decided that I wouldn't be the best person to make that argument.
I think anyone who is biting bullets and defending rational principles broadly applied is just more comfortable dropping intuitions (or holds them less strongly) and less comfortable with logical inconsistency (sound like anyone you know?). But I don't think that makes their claims about morality any truer than the dumbfounded. I disagree that the right answer to inconsistent intuitions is just deciding to pick some intuitions and ignore them.
You can keep all of them if you're okay saying that sometimes there are only immoral choices (or at least no moral ones) and that sometimes the action we ought to take is under-determined by our moral intuitions.
I wonder if the higher rate of consequentialists here, relative to the general population or to the population of ethicists, might be explained solely by differing rates of AS, plus consequentialists self-selecting here because they have found kindred hearts.
Have we ever polled for demographics on neurotypicality?
This is an interesting thread. Admittedly, I've often thought to myself when reading LW posts: "this post was clearly written by someone with AS". If people with AS are drawn to sites like this, maybe that, in part, explains why there seems to be many more men here than women. I wonder if the male:female LW ratio is similar to the male:female AS ratio in the general population.
Autism in general affects four times as many men as women in the general population; but I've noticed that a surprisingly high proportion of the autistic "public figures" - given that ratio - are women. Temple Grandin, for instance, may be the most famous person with autism around; and a majority of the autism bloggers I've run across are female. I don't know why this is.
Does this statistic refer only to severe cases of autism that are likely to be noticed and diagnosed whenever they occur, or also to the milder, high-functioning autism spectrum disorders? Because if the latter, I would expect that mildly autistic men are much more likely to be noticed as weird and dysfunctional than women, so this might account for at least a part of the discrepancy in the rate of diagnosis.
The explanation for the greater public prominence (and presumably social acumen) of female autistics is probably similar. In most situations, it's probably harder for autistic men than women to avoid coming off as creepy or ridiculous.
Are the words "women" and "men" reversed in your opening sentence?
Does "autism bloggers" mean "people who blog specifically about autism"?
If so, it might be instructive to check how many bloggers on other subjects also happen to have autism. It might be difficult to verify, but the blogosphere is large enough to dig up a usefully-sized sample and disentangle the autism-blogging link to some degree.
This seems like exactly the sort of attitude that would disappear in any reasonable preference extrapolation algorithm.
I don't have evidence for that proposition, but I wanted to (shamelessly) point out that attraction to decompartmentalization can be phrased as a willingness to go from Level 1 to Level 2 in my hierarchy. That is, to go from understanding domains independently, to checking for global consistency and multi-directional implication across them.