Kip

Great post.

You seem to think of personal identity (PI) as a brittle thing, easily broken.

I want to note that the issue you raise, about whether PI is thick or thin (e.g. thick to the point of brittleness), seems to divide compatibilists and incompatibilists: compatibilists think PI is thick, incompatibilists thin. Consistent with my interpretation, you both (i) defend a thick notion of PI and (ii) strongly sympathize with compatibilism. Note that Daniel Dennett, another compatibilist (whom you seem fond of), raises many of the same objections about people-switching and memory-wiping at the end of Freedom Evolves (in particular, in criticizing Mele's view).

Here's how the issue of PI divides compatibilists and incompatibilists. Suppose PI is thin. In fact, suppose that PI is just associated with numerical identity (in the philosophical sense). Numerical identity, as I will call it, just picks out one particular thing in the world and tracks it, even if the thing slowly evolves into a completely different TYPE of thing.

The classic example is the Ship of Theseus, whose planks are replaced one by one until none of the original material remains. Is it still the same ship? Incompatibilists will say yes. This will still tend to be true even if the ship slowly morphs into a completely different kind of ship.

Compatibilists say no. They focus not on picking out and tracking an evolving object, but on expressing the characteristics and features of a person. What matters to them is that angry people can express anger, throw punches, and start fights without being held back by chains; and sad people can cry and lie in bed all day without being held back by chains. Compatibilists, in short, are concerned with a freedom that nobody doubts most people have most of the time.

Incompatibilists, rather than being concerned with this shallow freedom, are concerned with how people came to be the types of people they are. In particular, they ask: if people did not control how they came to be who they are, and if what they do flows naturally and inevitably from who they are, how fair is it to hold them responsible and accountable?

Kip

People don't want to believe that you can control an AI, for the same reason they don't want to believe that their life stories could be designed by someone else. Reactance. The moment you suggest that a person's life can only go one way, they want it to go another way. They want to have that power. Otherwise, they feel caged.

People think that humans have that power. And so they believe that any truly human-level AI must have that power.

More generally, people think of truly, genuinely, human level minds as black boxes. They don't know how the black boxes work, and they don't want to know. Scrutinizing the contents of the black box means two things:

  1. the black box only does what it was programmed, or originally configured, to do: it is slowly grinding out its predetermined destiny, fixed before the black box started any real thinking
  2. you can predict what the black box will do next

People cringe at both of these thoughts, because they are both constraining. And people hate to be constrained, even in abstract, philosophical ways.

2 is even worse than 1. Not only is 2 constraining (we only ever do what a competent predictor says we will do), but it makes us vulnerable. If a predictor knows we are going to turn left instead of right, we're more vulnerable than if he doesn't know which way we'll turn.

[The counter-argument that completely random behavior makes you vulnerable, because predictable agents better enjoy the benefits of social cooperation, just doesn't have the same pull on people's emotions.]

It's important to realize that this blind spot applies to both AIs and humans, and that we're fortunate AIs are predictable rather than black boxes, because then we can program them. We can program them to be happy slaves, or anything else, for our own benefit, even if we have to give up some misguided positive illusions about ourselves in the process.

Kip

Where is this noirish Eliezer when he's writing about the existence of free will and non-relativist moral truths?