I think most people reading this blog agree that the folk idea of a "self" is bunk. There is no ghost or homunculus unconstrained by physical reality controlling our body, nor are our future or past actions understandable or predictable by our current self.

I assume this seems unimportant to you; it seemed unimportant to me. But when I was trying to steelman Harris' case for harping on so much about free will, I think I accidentally convinced myself that I was wrong to think it so irrelevant a matter.

Among philosophers, someone like Chalmers or Nagel is actually fairly niche and often cited just for the association. Which philosophers do people actually read — say, the ones the vast majority of politicians have read? Well, for example, John Rawls.

Remember Rawls' "original position" thought experiment? The one most people take to be the underpinning of our commonly agreed-upon political philosophy?... yeah, you're right, it makes no sense unless you assume you are a magical ghost that just so happens to be controlling a human body.

And suddenly you realize that a very, very large swath of philosophy and thought in general, both modern and historical, might be laboring under this completely nonsensical model of the mind/self, and we simply fail to notice it.


Since I wrote that, I have kept thinking of more and more situations where laboring under this model seems to cause a great deal of confusion.

I think one is how people think about the algorithms used to control things like cars when those algorithms make decisions of ethical importance. The very idea of an autopilot having to make an ethical decision is somewhat silly, given how rarely a situation occurs on the road in which someone must be harmed. But let me grant that there are situations in which an autopilot might be forced to make a split-second decision with ethical import.

Maybe not quite a trolley problem, something more realistic: braking hard enough to endanger the driver, or bumping slightly into the car in front, damaging it and putting its passengers in slight danger. Or deciding whether to veer off-road into a ditch or hit a child that jumped into the middle of the street. You get the point; take your pick of a semi-reasonable thought experiment where, one way or another, the autopilot is likely to cause harm to someone.


The "real" answer to this question is that nobody has the ability to give a general answer. The best you can do is observe the autopilot's behavior in a given instance.

Note: assume I'm talking about any popular self-driving software + hardware suite you'd like

The autopilot follows simple rules focused on getting to a destination without crashing or breaking any laws. Under any risky conditions (e.g. the road is too wet, a mechanical defect is detected) it hands the wheel, and the choice of whether to keep going, back to a human driver, or outright immobilizes the car until help can arrive.

When it comes to split-second decisions where risk is involved, there is no "ethics module" that kicks in to evaluate the risk. If the car can't be safely stopped, the autopilot will try to unsafely stop it. How it does this is a generative rule stemming from its programming for more generic situations. Since the number of specific dangerous situations is infinite, you can't really have specific rules for all (or any) of them.
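To make that concrete, here is a minimal sketch of the kind of control loop I mean. The sensor fields, thresholds, and action names are all invented for illustration, not any real vendor's software; the point is purely structural: the same generic rules handle both the mundane and the dangerous cases, and nothing in them reasons about who gets harmed.

```python
# Hypothetical sketch of the control loop described above. The names,
# fields, and thresholds are invented for illustration; this is not any
# real vendor's API. Note that there is no "ethics module" anywhere --
# just generic rules applied to mundane and dangerous situations alike.
from dataclasses import dataclass

@dataclass
class Sensors:
    road_friction: float       # estimated grip, 0.0 (ice) to 1.0 (dry asphalt)
    mechanical_fault: bool     # e.g. a detected defect
    time_to_collision: float   # seconds until impact on the current path

def autopilot_step(s: Sensors) -> str:
    # Risky conditions: hand the decision back to a human, or stop the car.
    if s.mechanical_fault or s.road_friction < 0.3:
        return "request_human_takeover_or_safe_stop"

    # Imminent collision: the same generic rules (brake hard, stay in lane)
    # apply; nothing here weighs who gets endangered.
    if s.time_to_collision < 1.0:
        return "emergency_brake_and_steer"

    # Otherwise, keep driving toward the destination without breaking laws.
    return "follow_route_within_limits"

# Example: a sudden obstacle triggers the generic emergency rule.
print(autopilot_step(Sensors(road_friction=0.8, mechanical_fault=False,
                             time_to_collision=0.4)))
```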


People gasp at this in horror: "Well, how can you entrust a thing with the power to take away a human's life if it doesn't reason about ethics?"

It's at this point that the illusory self comes in. With a moment of clear thinking, it's obvious that a human is no different from an autopilot in these circumstances.

Your system of ethics, education, feelings about those in the car or about the people on the road will not come into play when making a split-millisecond decision. What will come into play is... well, a complex interaction of fast-acting sub-systems which we don't understand.

At no point will you ponder the ethics of the situation, who to endanger and who not to, while trying to stop a car going 100km/h in the span of less than a second.

People thinking about wannabe trolley problems in traffic and assuming they know how they would react are just confused about their own minds, thinking their actions will be consistent with the narrative they are living at a given moment, under any circumstances.

What I find interesting in this scenario though, besides the number of people fooled by it, is that it doesn't have to do with self-control or preference prediction. It's simply a scenario that's too quick for any "self-like" thing to be in control, yet for some reason we are convinced that, were we put in such a situation, we could act from a place that is, or at least feels similar to, the current self.


One interesting point here though is that an autopilot could in theory aspire to make such decisions while thinking ethically, even if a human can't, since compute might allow for it.

Currently, we hold humans liable if they make a mistake on the road, but only to the extent that we take away their right to drive, which is the correct utilitarian response (and it happens too seldom). But nobody goes to prison for killing a pedestrian outside of situations where the accident was due to previous acts that conscious decisions could have avoided (getting into the car drunk, getting into the car without a license).

However, car companies could be held liable for killing pedestrians, and the liability could run to dozens or hundreds of millions, since, unlike individuals, they can afford this. That would lead to a race to build a 100% safe autopilot, and we might expect that autopilot to do superhuman things such as reasoning morally under split-millisecond constraints and making the correct decision.

But right now I think we are far away from that, and most people still oppose autopilots on the grounds that they are inferior to people. This is what I'm hoping to help debunk here.

gjm:

I think the following two propositions are different. 1. "Quick decisions are not made by something 'self-like'". 2. "Quick decisions are made in a way that has nothing to do with your ethics." #1 is probably true, at least if "quick" is quick enough and "self-like" is narrow enough. But #2 doesn't follow from it. In the sort of situation you describe, for sure you won't be pondering the ethics of the situation -- but that doesn't mean that the lower-level systems that decide your actions have nothing to do with your ethics.

I don't know for sure whether they do. (I wonder whether there's been research on this.) But the following (related) things seem at least plausible to me. I am not claiming that any specific one is true, only that they all seem plausible, and that to whatever extent they're true we should expect rapid decisions and ethics to be related.

  • Part of what "being good" is is having the bits of your brain that generate plans be less inclined to generate plans that involve harming people.
  • Part of what "being good" is is having what happens to other people be more salient, relative to what happens to you, in your internal plan-generating-and-assessing machinery.
    • [clarification: I don't mean that being good means having other people matter more than yourself, I mean that since almost all of us notice our own interests much more readily than others, noticing others' interests more generally goes along with being more "good".]
  • If you try to make yourself a better person, part of what you're trying to do is to retrain your mental planning machinery to pay more attention to other people.
  • It is in fact possible to do this: higher-level bits of you have some ability to reshape lower-level bits. (A trained artist sees the world differently from muggles.)
  • Even if in fact it's not possible to do this, people for whom what happens to other people is more salient tend to become people who are "good" because when they reflect on what they care about, that's what they see.
  • The more you care about other people's welfare, the more the (somewhat conscious) process of learning how to drive (or how to do other things in which you might sometimes have quick decisions to make that affect others) will be directed towards not harming others.

As a separate but related point, avoiding accidents in which other people get hurt is not just a matter of what you do at the last moment. It's also about what situations you, er, steer yourself towards on longer timescales. For instance, caring about the welfare of the driver in front of you will make you less inclined to drive very close behind them (because doing so will make them uncomfortable and may put them at more risk if something unexpected happens), which will bias those split-second decisions away from ones where more of the options involve colliding with the vehicle in front of you. And that is absolutely the sort of thing that autopilot systems can have different "opinions" about.

There was actually an example of that just recently. Some bit of Tesla's automated driving stuff (I'm not sure whether it's the ill-named "full self-driving", or something that's present on all their cars; I think the former) has three settings called something like "cautious", "normal", and "assertive". If you select "assertive", then when approaching a stop sign the car will not necessarily attempt to stop; rather, if it doesn't detect other vehicles nearby it will slow down but keep going past the sign. It turns out that this is illegal (in the US; probably in other jurisdictions too, but the US is the one I heard about) and Tesla have just announced a recall[1] of tens of thousands of vehicles to make it not happen any more. Anyway, since Tesla's ability to detect other vehicles is unfortunately less than perfect, this is a "strategic" choice that when made makes "tactical" emergency decisions more likely and more likely to involve harm to other people.

[1] Although this is the word they use, I think all it means in practice is that they're notifying people with those cars and doing an over-the-air update to disable this behaviour.

[anonymous]:

This is beside the point, but your example about Tesla rolling back their autopilot assertiveness to comply with the law made me realize a hidden risk of automation: it makes laws actually enforceable. The perhaps most important civilizational defense against bureaucracy clogging up its arteries, common-sense non-compliance, is being removed. This is a terrible precedent and made self-driving technology much less appealing to me all of a sudden.

There are some senses here that tend to be entangled and can be hard to tear apart. This might be shooting off on a tangent.

In Magic: The Gathering's color system, it's easy to have positive associations with the color white, but white does not mean good. The properties and skills described in the parent post are prosocial, and it is sensible to have a system that places great value on these things. But white can also be evil, and things that white calls evil are not necessarily so.

In Dungeons and Dragons one might play a character whose alignment is Evil. But every character is the hero of their own story, and as the player one has to wonder what psychological principles go into how the character chooses. To me, an evil character is essentially one living for themselves: if their win condition is also another being's lose condition, they choose to win.

In Babylon 5, the shadier side has a theme of asking their negotiating partners "What do you want?" and then either pointing out a course of action that gets them that or offering a deal that gets them that. On its face this seems neutral, even like a definition of moral contemplation. However, it takes on antagonistic shades in that the cost of the deal is often great destruction or betrayal. And even when they are not offering deals but merely pointing out a way to get the outcome, acting in that way has great externalities for other actors. The logic goes something like: "the thing you want is possible and in your power", "you do not choose to receive that outcome", "so do you actually want the thing or not?", "probably not, because you turned it down". This can bait the universe's occupants into forming a narrower will than they otherwise would: "Yes, I actually do want the thing and will bite whatever bullet."

In this kind of "black morality" there is a corresponding skill of "being effective": being more aware of whether your actions further your interests, in contrast to what others ask for and care about. If Black knows it gets what it wants and doesn't know its effect on others, Black is perfectly happy to be effective. In contrast, if White knows that others are not harmed and doesn't know what it wants out of life, White is perfectly happy to be safe and inoffensive. Of course, with increased awareness fewer details are left to ignorance and more come under the influence of conscious choice.

In Upload, the characters live in a world where automobiles have a setting of "protect occupant" or "protect pedestrian". I think having this choice is good, but I don't know whether one option or the other can be condemned. In particular, I am not sure it is proper to try to make people choose "protect others"; forbidding self-preservation is not a good idea. People should be able to trade their preservation against other goods, but it should be their choice.

But yeah, the point was that "good citizen" is separate from "good person", and moral progress can look like deconstructing the bits where you are prosocial only by ignorance or accident. Or, rather than there being a balance between self and others, caring-about-self and caring-about-others can be skills that are strong together. But suppressing or dismissing caring-about-self is seldom productive. It's more that the opportunity cost of skipping out on growing self-awareness is usually sensible in furthering the rarer caring-about-others.

I feel like people do get into actual trouble when decisions indirectly kill people. Just now I was watching news coverage about manslaughter charges because a person helped a drunk person regain access to their car keys. That seems even more removed than charging for the DUI.

I suspect that producing "lethally dangerous cars" is not so vivid in the public consciousness because safety precautions are taken so seriously that safety recalls are mundane and don't register as special, exceptional events. That is partly why such things would be so scandalous to encounter: they are so preventable and there are so many systems for early detection.

I do think that there are different levels of ethical attunement. You can be oblivious and just react to things as they happen. But some people can appreciate the meaning of driving slowly in a school district even before any child jumps into the middle of the road. Part of the driving-licence training I went through was to ponder how it would be an error state to suddenly swerve to avoid hitting a squirrel, with the high probability of putting human lives in danger. I appreciate that in an actual live situation I can't benefit from those deliberations to the full extent that I think them over in this theoretical presetting, but I do think it makes a difference to how I would actually drive (even if it just means slightly smoother drive lines and 0.2-0.3 second reaction-time differences).

I can totally see a driver doing what they can in panic mode and then, after the fact, looking back and seeing how their actual driving was detached from the story they are telling themselves. But at the same time I can see people being willing to later stand up for their split-second decision to run over property rather than a life, or some other such thing. I suspect there is also a difference in the level of annoyance when a "close call" situation happens. When your thought process actually went to thinking about splashing yourself against the wall to increase safety, you tend to be a little more hotheaded about the dangerous situation forming than if you just panicked deer-in-the-headlights through it. And even legally, the whole concept of "negligence" can be thought of in terms of whether a proper level of awareness and care was rendered. You can be drunk in your house, but in a car you have a duty to be perceptive (and dexterous and all kinds of stuff), and if you can't be perceptive because you are drunk you should exit the vehicle.

But such things try to target the danger of the conduct rather than the actual bad fallout. Thus whether the ultimate outcome is harmless or lethal can't really be a factor in the forbiddenness or badness of the act. That is why people caught texting while driving, or receiving speeding tickets, tend to think the regulations are unnecessarily strict and punishing, while when something actually happens the consequences can seem light.

It is probably the case that wetware is actually more error-prone and dangerous than silicon at the helm of a car. However, people have accepted the risks of wetware malfunction much more readily than those of silicon. Correcting errors in silicon is an area of very narrow expertise; punishing and dealing with the inadequacies of humans is something everybody has to deal with. It is a different standard. In a way I understand that if somebody made a factory that produced "only" handcrafted levels of perfection and reliability, it would be horribly broken as a factory, yet the same class of products made by hand would be perfectly marketable. That is part of the package: technology is only expected to do a very narrow thing, but is then expected to do that very limited thing nearly perfectly.

TAG:

It is not as if the unconscious is independent of the conscious mind. Habits and reflexes can be learned with conscious effort and then performed "on autopilot". That's how people drive, play sports and musical instruments, etc. The conscious mind is too slow to perform such behaviours, and they are not instinctive either, so they are trained reflexes. There is a two-way causal relationship between the unconscious and the conscious mind: the unconscious mind prompts the conscious mind, and the conscious mind trains the unconscious.