Roko comments on The Psychological Diversity of Mankind - Less Wrong

79 Post author: Kaj_Sotala 09 May 2010 05:53AM


Comments (153)


Comment deleted 09 May 2010 04:24:00PM *  [-]
Comment author: ata 09 May 2010 04:55:04PM *  8 points [-]

I've picked up some anecdotal evidence for that over the past few months. Just a week ago I was talking with one guy with AS about some ethics problems; he brought up an example where you're hiding from an approaching army along with 20 other people, including a baby who won't stop crying. Under some simplified assumptions, if the baby keeps crying, the army will find and kill all of you, and if the baby stops, they probably won't. If killing the baby is the only way to stop it, is it moral to do so? The consequentialist answer seemed obvious to both of us, even when he specified that the army would spare the baby's life but kill the rest of you. He told me that this is a characteristically autistic way of thinking about moral problems, and he's had more contact with autistic/AS people than I have (aside from being one himself), so I'm inclined to believe him. (I'm not AS myself, but apparently close enough that several people at several points in my life have suspected it, though not enough to be diagnosed with it.)

Edit: He wasn't sure about torture vs. dust specks, but that seemed to be more because he didn't see how a problem involving such impossibly huge numbers of people could have any useful implications about more realistic ethical scenarios. I disagreed — the math is the same, and I think pathological cases are useful for testing the integrity and consistency of ethical theories and for testing how seriously a person takes the theory/methodology they profess to follow — but he didn't find that particular point to be relevant.

Comment deleted 09 May 2010 10:42:29PM [-]
Comment author: Vladimir_M 09 May 2010 11:03:55PM *  11 points [-]

Are you sure "flummoxed" is the right word? I don't think "neurotypicals" are confused by the mathematics involved. They just dispute that the utilitarian math represents an accurate theory of ethics. Would you use the word "flummoxed" for a physicist who understands the mathematics of a theory but disputes that it says anything relevant about the real world, even if he has no alternative theory to offer?

For full disclosure, I am not convinced by utilitarian arguments at all, both in these problems you mention and in most other widely disputed ones. I understand them with perfect clarity; I just dispute that they have any relevance beyond the entertainment value of the logical exercise, and possibly propaganda value for some parties in some situations. I certainly wouldn't describe my situation as "flummoxed."

Comment deleted 10 May 2010 01:05:08AM *  [-]
Comment author: SilasBarta 10 May 2010 02:14:38AM *  10 points [-]

Many neurotypicals I have spoken to will take really extreme positions on the fat man trolley problem, saying that they wouldn't push the fat man off the bridge even if a million people were on the trolley.

Eh, as I've argued before on LW, there are utilitarian, AS-compatible justifications for such a position: specifically, that your heroic act shuffles around the risk profiles of various activities in unpredictable ways, thus limiting the ability of people to manage risks, leading them to waste significant resources (perhaps exceeding the amount that would otherwise save more than a million lives) returning to their preferred risk profile.

The key part:

By intervening to push someone onto the track, you suddenly and unpredictably shift around the causal structure associated with danger in the world, on top of saving a few lives. Now, people have to worry about more heroes drafting sacrificial lambs "like that one guy did a few months ago" and have to go to greater lengths to get the same level of risk.

In other words, all the "prediction difficulty" costs associated with randomly changing the "rules of the game" apply. Just as it's costly to make people keep updating their knowledge of what's okay and what isn't, it's costly to make people update their knowledge of what's risky and what isn't (and to less efficient regimes, no less).

Note that this doesn't argue for a deontological prohibition, but rather, argues about the consequences of sudden deviations from social norms, without assumption of their categorical justness.

ETA: In terms of Timeless Decision Theory, you could put it this way: if people knew that bridge-walkers are drafted for deadly work on a moment's notice, it's much less likely you'd have a fat person handy to begin with. So, the way TDT calculates probabilities, the EU of pushing the fat guy off is very small on account of its low TDT-probability, eliminating the supposed utility gain.
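The comparison SilasBarta sketches can be made concrete with toy numbers. This is only an illustrative sketch: all the figures (the million lives, the tiny TDT-probability) are hypothetical, chosen just to show the shape of the argument, not any actual TDT calculation.

```python
# Toy expected-utility comparison for the fat-man trolley problem.
# All numbers are hypothetical; only the structure of the comparison matters.

LIVES_ON_TRACK = 1_000_000  # people saved if the trolley is stopped

# Naive (causal) calculation: the fat man is simply there, so pushing
# trades one life for a million.
eu_push_naive = LIVES_ON_TRACK - 1

# TDT-style calculation: in a world where bridge-walkers are known to be
# draftable as sacrifices, hardly anyone suitable stands on the bridge at
# all, so the scenario almost never arises to begin with.
p_fat_man_present = 1e-9  # hypothetical TDT-probability of the setup occurring
eu_push_tdt = p_fat_man_present * (LIVES_ON_TRACK - 1)

print(eu_push_naive)  # 999999
print(eu_push_tdt)    # roughly 0.001 -- the supposed utility gain evaporates
```

On this toy accounting, the naive expected gain is enormous, while the TDT-weighted gain is negligible, which is the sense in which the "low TDT-probability" is doing the work in the argument above.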

Comment author: NancyLebovitz 10 May 2010 08:05:49PM 7 points [-]

It isn't just about being fat while on a bridge over trolley tracks, of course. It might be a worse world if people generally believed they should take deadly action whenever they see a utilitarian win.

Comment author: CarlShulman 10 May 2010 06:27:34PM 3 points [-]

Much less likely? That would require that such drafting be more likely on bridges than elsewhere (how often do these train accidents happen?). Also, ex ante one is more likely to find oneself one of the million saved than the one person sacrificing, so most everyone should agree to a policy that those in positions to offer incredible help be drafted.

Comment author: SilasBarta 10 May 2010 07:10:13PM 3 points [-]

Much less likely? That would require that such drafting be more likely on bridges than elsewhere

The problem induced by pushing the fat guy off is that people don't know which zones now count as "sacrificial lamb" zones (because of the bizarreness of the deviation from social norms), except that bridges over densely-populated trolley tracks are one of them, so I think the resulting world meets this criterion.

one is more likely to find oneself one of the million saved than the one person sacrificing, so most everyone should agree to a policy that those in positions to offer incredible help be drafted.

But people are already choosing risk profiles that, under present social norms, cause them to die when near tracks that have an errant trolley coming, so it's not clear why they'd make tradeoffs (giving up other things they value) for greater near-trolley safety, and thus not clear why they'd prefer this at all.

In this case, the cost (borne by everyone in the area, not just people near tracks) is that they have to re-organize their lives around choosing routes that avoid sacrificial lamb zones. But -- by the scenario's stipulation -- people aren't currently choosing to bear the additional cost to be on the safer bridge rather than the dangerous track. (If they were, the scenario would involve millions crossing the bridge and few near the track.) What they are choosing is to bear the risk of death because of the convenience it affords.

And because the option of pushing someone off the track tells people, "Okay, you have to be a lot more risk averse to get your current level of risk", they're forced to pay more for the same safety.

Comment author: Vladimir_M 10 May 2010 02:21:42AM *  14 points [-]

On the other hand, don't forget that talk is cheap, and actions speak louder than words. I doubt that many utilitarians would be willing to follow their conclusions in practice in situations such as the fat man/trolley problem. To stress that point even further, imagine if you had to cut the fat man's throat instead of just pushing him (and feel free to increase the cost of the alternative if you think this changes the equation significantly relative to pushing). I'd bet dollars to donuts that a large majority of the contemporary genteel utilitarians couldn't bring themselves to do it, no matter how clear the calculus that -- according to them -- mandates this course of action.

This suggests to me that this "dumbfoundedness" might in fact be a consequence of more clear and far-reaching insight, not confusion. Biting moral bullets is easy in armchair discussions; what you'd actually be able to bring yourself to do is another question altogether. Therefore, when I see people who coolly affirm the logical conclusions of their favored formal ethical theories even when they run afoul of common folks' intuition, I have to ask whether they are really guided by logic to an exceptional degree in their lives -- or whether they simply fail to see, out of sheer mental short-sightedness, how remote their armchair theorizing is from what they'd be willing and able to do if they, God forbid, actually found themselves in some such situation.

(This is not the reason why I don't see any validity in utilitarianism; that would be a topic for another discussion altogether. The point here is that logical consistency in ethical armchair discussions could in fact be a consequence of myopia, not logical clear-sightedness.)

Comment deleted 10 May 2010 02:34:45AM [-]
Comment author: Vladimir_M 10 May 2010 02:54:52AM *  4 points [-]

You're allowed to say "X is the action I would want to take, but I wouldn't be able to"

I don't think this statement is logically consistent. Unless you're restrained by some outside force, if you don't do something, that means you didn't want to do it. You might hypothesize that you would have wanted it within some counterfactual scenario, but given the actual circumstances, you didn't want it.

The only way out of this is if we dispense with the concept of humans as individual agents altogether, and analyze various modules, circuits, and states in each single human brain as distinct entities that might be struggling against each other. This might make sense, but it breaks down the models of pretty much all standard ethical theories, utilitarian and otherwise, which invariably treat humans as unified individuals.

But regardless of that, do you accept the possibility that at least in some cases, bullet-biting on moral questions might be the consequence of a failure of imagination, not exceptional logical insight?

Comment author: ata 10 May 2010 07:32:57AM *  3 points [-]

I don't think this statement is logically consistent. Unless you're restrained by some outside force, if you don't do something, that means you didn't want to do it.

It's not always that simple. It would be inconsistent if our actions could be reduced to a simple utility function and we consistently used the word (and emotion) "want" to refer to actions that maximize that utility function, but neither of those is the case, because we're not intelligently-designed optimization processes. Our brains don't act under a single unified goal system, and very often the part of us that says it wants to do x, or the part that believes it wants to do x, or the part that would be happy if it could do x, or the part that feels bad if it doesn't do x — any of the parts where it feels like "wanting" rather than "doing" — isn't always the part that makes the decision. (In fact, in a direct causal sense, I'd say it's not the part that makes the decision, period. Sometimes it just seems like they're the same when they're properly synchronized.) Neither is the part that makes moral judgments on one's own actions and on others' actions, and so on.

Have you read any of the discussions of akrasia here? That's essentially shorthand for what we're talking about here (wanting to do something but not doing it), and if you are willing to discuss it on human terms — in terms of what humans actually mean when they say "want" rather than what a single-minded decision-theoretic reasoner would mean by it* — then such discussions can be quite fruitful, and not logically inconsistent or meaningless at all.

* If such an agent would say it at all, that is. It could be taken as a mistranslation, in the same sense that Eliezer says translating any of the Babyeaters' words about their own decisions as "right" would be a mistranslation. If a perfect decision-theoretic agent's utility function specifies some action, then by definition, it will automatically pursue that; there's no room for any "wanting" there, just deciding and doing. Indeed, the very fact that we have different words for "want" and "pursue" reflects the reality that we can and very frequently do one but not the other.

Comment author: Vladimir_M 10 May 2010 07:39:19PM *  2 points [-]

ata:

Have you read any of the discussions of akrasia here? That's essentially shorthand for what we're talking about here (wanting to do something but not doing it), and if you are willing to discuss it on human terms — in terms of what humans actually mean when they say "want" rather than what a single-minded decision-theoretic reasoner would mean by it* — then such discussions can be quite fruitful, and not logically inconsistent or meaningless at all.

Yes, I've read lots of stuff written about akrasia on this blog. This would be a topic for a whole separate discussion, but to put it as briefly as possible, in general I'm highly suspicious of such concepts. I view them through what Bryan Caplan calls the "Gun-to-the-Head Test" (I had actually come up with the exact same argument independently before I read about it from Caplan):

Can we change a person's behavior purely by changing their incentives? If we can, it follows that the person was able to act differently all along, but preferred not to; their condition is a matter of preference, not constraint. I will refer to this as the "Gun-to-the-Head Test." If suddenly pointing a gun at alcoholics induces them to stop drinking, then evidently sober behavior was in their choice set all along.

Note how different this is from people who have no control over their behavior even under this test. A Parkinson's patient can't stop his hands from shaking, and a person with normal nerves can't suppress the knee jerk when struck on the patellar ligament, no matter what you threaten them with.

Ultimately, I believe that people engage in akrasia and "addictive" behaviors because they sincerely want to. Procrastination and substance abuse are fun and pleasant, and may well be worth a large cost for those sufficiently fond of them. And if these people can subsequently claim that their socially disapproved behaviors were somehow against their will, and thereby lower their cost by assuaging the reputational consequences -- well, no wonder such excuses are popular. Saying that you "want" to avoid procrastination is just ritual signaling behavior, just like smokers saying that they "want" to quit.

I should add that this is a complex topic, to which this brief post doesn't do justice, but this does summarize my view on the matter.

Comment author: SilasBarta 10 May 2010 03:13:05AM *  1 point [-]

It's not that much of a difference. Such a model could still accept that humans are unified individuals, but also attached to parts (defined as not the relevant part of the human) that interfere with the human's actions.

Roko's alternative is just to say, "X is that action that I would attempt; hardware inextricably connected to me would also stop me from doing X."

Of course, that does run into problems like, "So you agree that you're running on corrupted hardware that stops you from doing what you believe is morally right -- why should I trust you, then?"

Comment author: Mass_Driver 10 May 2010 03:38:06AM 9 points [-]

This might make sense, but it breaks down the models of pretty much all standard ethical theories, utilitarian and otherwise, which invariably treat humans as unified individuals.

Except for very narrow definitions of "standard," this is just incorrect. Plato, Hume, Kant, and John Stuart Mill all understood and wrote about the difference between what they thought of as the rational or refined will and the more emotional appetite. Likewise Maimonides, St. Augustine, Epictetus, and a 16th century Taoist scholar whose name I can look up for you if it's actually important. In fact, an enormous part of standard ethics deals with the divergence between what we say is right and what we actually do, and tries to identify ways to help us actually do what we say is right.

The blanket assertion that anything you do without being physically restrained is what you wanted to do under the circumstances is a creature of 20th century free-market economics. While it can be part of a self-consistent moral philosophy (e.g. Ayn Rand's Objectivism), it's hardly a litmus test for sound ethical thinking. On the contrary, we should be deeply suspicious of any moral theory that tells us that whatever we do must be what we wanted to do, because it conveniently justifies a set of actions that we (apparently) find quite easy to carry out. What is easy is not always right.

Comment author: fraa 17 June 2010 08:12:35AM 1 point [-]

I am a bit confused OTOH why non-ADHD people (without akrasia, a term I just learned here on this website) find such questions interesting at all. To me, no matter what "system of morals" you may have, it's mostly useless thinking, because it's not like what I do depends that much on what I actually want to do, in my self-awareness.

Comment author: Blueberry 17 June 2010 06:21:09PM 0 points [-]

it's not like what I do depends that much on what I actually want to do

So true. That's what akrasia is. But I'd be surprised if there were people who didn't experience that at least a little bit.

Comment author: Kaj_Sotala 10 May 2010 02:32:33AM *  9 points [-]

Interesting. This implies that there are actually two ways of interpreting such moral dilemmas: either as A) "what would you actually do in this situation", or B) "what would be the right thing to do in this situation, regardless of whether you'd actually be capable of doing it".

I've always interpreted the questions as being of type B, but the way you write suggests you're thinking of them as being type A. I wonder how much of the disagreement relating to these questions is caused by differing interpretations.

Comment author: Vladimir_M 10 May 2010 03:33:15AM *  18 points [-]

It's more complicated than that. Most people would say that there are imaginable situations where a certain course of action is right, but they'd be strongly tempted to act differently out of base motives. For example, if you ask a typical person whether it would be right to gain a large amount of money by some sort of cheating, assuming you know for sure there won't be any negative consequences, they'll immediately understand that the question is about what's normatively right, not how they'd be tempted to act. Some very sincere people would probably admit that they might yield to the temptation, even though they consider it wrong.

Now, imagine you're introduced to someone who had the opportunity to cheat a business partner for a million dollars with zero risk of repercussions, but flat-out refused to do so out of sheer moral fiber. You'll immediately perceive this person as trustworthy and desirable to deal with -- a man who acts according to high principles, not base passion and instinct. In contrast, you'd shun and despise him if you heard he'd acted otherwise.

However, let's now compare that with the extreme fat man problem (where you'd have to cut the fat man's throat to avert some greater loss of life). Imagine you're introduced to someone who was faced with it and who slit the fat man's throat without blinking. Would you feel warm and fuzzy about this person? Would any of the bullet-biting utilitarians fail to be profoundly creeped out just by the knowledge that they are standing next to someone who actually acted like that -- even though they'd all defend (nay, prescribe!) his course of action relentlessly when philosophizing? Moreover, I would again bet dollars to donuts that our genteel utilitarians would be much less creeped out by someone who couldn't bring himself to butcher the fat man.

When I think about this, I honestly can't help but detect severe short-sightedness in moral bullet-biters.

Comment author: MugaSofer 22 January 2013 10:28:23AM 0 points [-]

Imagine you're introduced to someone who was faced with it and who slit the fat man's throat without blinking. Would you feel warm and fuzzy about this person?

I'm not sure "warm and fuzzy" is the right term, but ... I would feel a certain respect, and of course update my probability that they will fail to take the correct action out of bias or akrasia. And my probability that they will kill me.

Would you be creeped out by someone who cheerfully admitted they would kill you if you turned evil? I mean mind-control-type evil. Because in fiction at least that's treated as a good thing, but still creepy.

(I think the creepiness is the fact that they can and will kill people, and there's the ever-present worry they might mistake you for a risk.)

Comment author: NancyLebovitz 10 May 2010 12:03:31PM 6 points [-]

I can believe that a neurotypical person would be more likely to imagine themselves doing the actual killing, while someone on the AS would be more likely to stay with the abstract problem.

Comment author: Jack 10 May 2010 01:33:02AM 4 points [-]

I was going to dispute your use of "flummoxed" as well but then I realized my position on normative ethics is basically an extended defense of moral dumbfoundedness and decided that I wouldn't be the best person to make that argument.

I think anyone who is biting bullets and defending rational principles broadly applied is just more comfortable dropping intuitions (or holds them less strongly) and less comfortable with logical inconsistency (sound like anyone you know?). But I don't think that makes their claims about morality any truer than the dumbfounded. I disagree that the right answer to inconsistent intuitions is just deciding to pick some intuitions and ignore them.

Comment deleted 10 May 2010 02:01:51AM [-]
Comment author: Jack 10 May 2010 02:16:36AM *  3 points [-]

You can keep all of them if you're okay saying that sometimes there are only immoral choices (or at least no moral ones) and that sometimes the action we ought to take is under-determined by our moral intuitions.

Comment author: MugaSofer 22 January 2013 11:13:27AM -1 points [-]

... so which is less immoral?

Comment author: Vladimir_M 10 May 2010 03:07:16AM 0 points [-]

Yes, why should we assume that these difficult ethical conundrums have some sort of "right answer" at all? Why would asking about the "right choice" in trolley and similar problems necessarily make any more sense than asking about the "correct value" of 0^0?

Comment author: orthonormal 16 May 2010 12:03:24AM *  3 points [-]

That raises an obvious question: what do you actually do if you find yourself in a Sophie's choice, especially if the result of the null or default choice is more monstrous to you than the results of the other choices? Refusing to consider a class of decision theory problems is tantamount to precommitting to an unconsidered answer should one of them arise.

Of course, in most cases, people actually do seem to consider horrific choices once they're actually faced with one; I therefore conclude that the popular response of refusing to make an analysis of such problems is more about signaling than anything else.

Comment author: Jack 09 May 2010 10:54:35PM *  3 points [-]

I wonder if the higher rate of consequentialists here, relative to the general population or the population of ethicists, might be explained solely by differing rates of AS, plus consequentialists self-selecting into this site because they have found kindred spirits.

Have we ever polled for demographics on neurotypicality?

Comment deleted 10 May 2010 12:59:35AM *  [-]
Comment author: neq1 10 May 2010 03:27:58PM 3 points [-]

This is an interesting thread. Admittedly, I've often thought to myself when reading LW posts: "this post was clearly written by someone with AS". If people with AS are drawn to sites like this, maybe that, in part, explains why there seem to be many more men here than women. I wonder if the male:female LW ratio is similar to the male:female AS ratio in the general population.

Comment author: Alicorn 10 May 2010 04:15:04PM *  3 points [-]

Autism in general affects four times as many men as women in the general population; but I've noticed that a surprisingly high proportion of the autistic "public figures" - given that ratio - are women. Temple Grandin, for instance, may be the most famous person with autism around; and a majority of the autism bloggers I've run across are female. I don't know why this is.

Comment author: Vladimir_M 10 May 2010 07:09:31PM *  7 points [-]

Autism in general affects four times as many men than women in the general population;

Does this statistic refer only to severe cases of autism that are likely to be noticed and diagnosed whenever they occur, or also to the milder, high-functioning autism spectrum disorders? Because if the latter, I would expect that mildly autistic men are much more likely to be noticed as weird and dysfunctional than women, so this might account for at least a part of the discrepancy in the rate of diagnosis.

The explanation for the greater public prominence (and presumably social acumen) of female autistics is probably similar. In most situations, it's probably harder for autistic men than women to avoid coming off as creepy or ridiculous.

Comment author: MC_Escherichia 10 May 2010 04:24:43PM 1 point [-]

Are the words "women" and "men" reversed in your opening sentence?

Comment author: Alicorn 10 May 2010 04:28:14PM 0 points [-]

Yes, thank you, fixing that now.

Comment author: Nanani 11 May 2010 02:36:14AM 0 points [-]

Does "autism bloggers" mean "people who blog specifically about autism"?

If so, it might be instructive to check how many bloggers on other subjects also happen to have autism. It might be difficult to verify, but the blogosphere is large enough to dig up a usefully-sized sample and disentangle to some degree the autism-blogging link.

Comment author: Alicorn 11 May 2010 02:44:54AM 0 points [-]

Yes, that's what I mean.

Comment author: steven0461 10 May 2010 09:25:20PM 2 points [-]

e.g. think that destruction of the world is OK, but be horrified by the death of a particular person

This seems like exactly the sort of attitude that would disappear in any reasonable preference extrapolation algorithm.