I'm having trouble distinguishing problems you think the friendly AI will have to answer from problems you think you will have to answer to build a friendly AI. Surely you don't want to have to figure out answers for every hard moral question just to build it, or why bother to build it? So why is this problem a problem you will have to figure out, vs. a problem it would figure out?
Because for the AI to figure out this problem without creating new people within itself, it has to understand consciousness without ever simulating anything conscious.
I am struggling to understand how something can be a friendly AI in the first place without being able to distinguish people from non-people.
The boundaries between present-day people and non-people can be sharper, by a fiat of many intervening class members being nonexistent, than the ideal categories. In other words, except for chimpanzees, cryonics patients, Terry Schiavo, and babies who are exactly 1 year and 2 months and 5 days old, there isn't much that's ambiguous between person and non-person.
More to the point, a CEV-based AI has a potentially different definition of 'sentient being' and 'the class I am to extrapolate'. Theoretically you could be given the latter definition by pointing and not worry too much about boundary cases, and let it work out the former class by itself - if you were sure that the FAI would arrive at the correct answer without creating any sentients along the way!
The "problem" seems based on several assumptions:
I'm not sure any of these are true. Regarding 3, even if there is an X that is special, and that we should keep in the universe, I'm not sure "persons" is it. Maybe it is simpler: "pleasure-fee...
Would a human, trying to solve the same problem, also run the risk of simulating a person?
See also: http://xkcd.com/390/
Is the risk that we might simulate a person? I'd say no.
It's worse.
We Natural Intelligences don't just run simulations, we torture them. It is recommended that authors "Be cruel to your characters". It's not clear to me that the simulation an author runs when thinking about a story isn't already "a 'simulation' detailed enough to be a person in its own right". But it's probably o.k., because the simulations we run in our heads aren't really that detailed, and aren't really persons in the important sense, right? So we don't have to start screaming yet, unless...
It's worse.
Because even if we aren't able to create a simulation that good, an AI probably could. We might not accept an AI as intelligent unless it can simulate a person well enough to fool us. That is, simulating people might be a necessary, not just sufficient property of AI. But still, we could, if we had to, avoid simulating people unless it was necessary and under ethical conditions. Unless of course...
It's worse.
Because while we might be ethical, there are certainly people out there who are not. Once the AI genie is out of the bottle, the unethical people will capture one and put it to work writing...
Note that there's a similar problem in the free will debate:
Incompatibilist: "Well, if a godlike being can fix the entire life story of the universe, including your own life story, just by setting the rules of physics, and the initial conditions, then you can't have free will."
Compatibilist: "But in order to do that, the godlike being would have to model the people in the universe so well, that the models are people themselves. So there will still be un-modeled people living in a spontaneous way that wasn't designed by the godlike being. (An...
"With a good toolbox of nonperson predicates in hand, we could exclude all "model citizens" - all beliefs that are themselves people - from the set of hypotheses our Bayesian AI may invent to try to model its person-containing environment." After you excise a part of its hypothesis space is your AI still Bayesian?
A bounded rationalist only gets to consider an infinitesimal fraction of the hypothesis space anyway.
More precisely, the AI will be banned from actually running simulations based on the "forbidden hypotheses", rather than from considering abstract mathematical properties that don't simulate anything in detail.
Of course, those considerations themselves would have to be fed through the predicate. But it isn't so much a "banned hypothesis" as "banned methods of considering the hypothesis", or possibly "banned methods of searching the hypothesis space".
Michael, you should be asking if the AI will be making good predictions, not if it's Bayesian. You can be Bayesian even if you have only two hypotheses. (With only one hypothesis, it's debatable.)
Eliezer: supposing we label a model as definitely-a-person, do you want to just toss it out of the hypothesis space as if it never existed, or do you want to try to reason abstractly about what that model would do without actually running the model?
Let me see if I've got this right. So we've got these points in some multi-dimensional space, perhaps dimensions like complexity, physicality, intelligence, similarity to existing humans, etc. And you're asking for a boundary function that defines some of these points as "persons," and some as "not persons." Where's the hard part? I can come up with any function I want. What is it that it's supposed to match that makes finding the right one so difficult?
Eliezer: You're welcome. :)
Arthur: no, the point isn't to simply have an arbitrary definition of a person. The point is to have some way of saying "this specific chunk of the space of computations provably corresponds to non-conscious entities, and thus is 'safe'; that is, we can run such computations without having to worry about unintentionally creating and doing bad things to actual beings"
ie, "non person" in the sense of "non conscious"
You might say, tongue in cheek, that we're trying to figure out how to deliberately create a philosophical zombie. (okay, not, technically, a p-zombie, but basically figure out how to model people as accurately as possible without the models themselves being people (that is, conscious in and of themselves))
Why must destroying a conscious model be considered cruel if it wouldn't have even been created otherwise, and it died painlessly? I mean, I understand the visceral revulsion to this idea, but that sort of utilitarian ethos is the only one that makes sense to me rationally.
Furthermore, from our current knowledge of the universe I don't think we can possibly know if a computational model is even capable of producing consciousness so it is really only a guess. The whole idea seems near-metaphysical, much like the multiverse hypothesis. Granted, the nonzero p...
I end up with the slightly disturbing thought that killing people by taking them out in an instant, without anyone ever knowing they were there, does not necessarily seem to be inherently evil.
We always 'kill' part of ourselves by making decisions and not developing in a different way than we do.
What if we simulated a bunch of decisions for some recognizable amount of time and then wiped out every copy except the one we prefer in the end?
Maybe all the people in the stories you make up are simulated entities too. And if you don't write the story down, or tell anyone in enough detail, they die with you.
Confused,
Martin
Psy-Kosh, I realize the goal is to have a definition that's non-arbitrary. So it has to correlate with something else. And I don't see what we're trying to match it with, other than our own subjective sense of "a thing that it would be unethical to unintentionally create and destroy." Isn't this the same problem as the abortion debate? When does life begin? Well, what exactly is life in the first place? How do we separate persons from non-persons? Well, what's a person?
I think the problem to be solved lies not in this question, but in how t...
Anonymous Coward: Furthermore, from our current knowledge of the universe I don't think we can possibly know if a computational model is even capable of producing consciousness so it is really only a guess.
Are you sure? No One Knows What Science Doesn't Know ... and in this case I see no reason why a computational model can't produce consciousness. If you simulate a human brain to a sufficient level of detail, it will basically be human, and think exactly the same things as the "original" brain.
"Why must destroying a conscious model be considered cruel if it wouldn't have even been created otherwise, and it died painlessly? I mean, I understand the visceral revulsion to this idea, but that sort of utilitarian ethos is the only one that makes sense to me rationally." -Anonymous Coward
Should your parents have the right to kill you now, if they do so painlessly? After all, if it wasn't for them, you wouldn't have been brought into existence anyway, so you would still come out ahead.
"Should your parents have the right to kill you now, if they do so painlessly?"
Yes, according to that logic. Also, from a negative utilitarian standpoint, it was actually the act of creating me which they had no right to do since that makes them responsible for all pain I have ever suffered.
I'm not saying I live life by utilitarian ethics, I'm just saying I haven't found any way to refute it.
That said though, non-existence doesn't frighten me. I'm not so sure non-existence is an option though, if the universe is eternal or infinite. That might be a very good thing or a very bad thing.
Don't you need a person predicate as well? If the RPOP is going to upload us all or something similar, doesn't ve need to be sure that the uploads will still be people?
@Will: we need to figure out the nonperson predicate only, the FAI will figure out the person predicate afterwards (if uploading the way we currently understand it is what we will want to do).
"by the time the AI is smart enough to do that, it will be smart enough not to"
I still don't quite grasp why this isn't an adequate answer. If an FAI shares our CEV, it won't want to simulate zillions of conscious people in order to put them through great torture, and it will figure out how to avoid it. Is it simply that it may take the simulated torture of zillions for the FAI to figure this out? I don't see any reason to think that we will find this problem very much easier to solve than a massively powerful AI.
I'm also not wholly convinced that the only ethical way to treat simulacra is never to create them, but I need to think about that one further.
If you would balk at killing a million people with a nuclear weapon, you should balk at this.
The main problem with death is that valuable things get lost.
Once people are digital, this problem tends to go away - since you can relatively easily scan their brains - and preserve anything of genuine value.
In summary, I don't see why this issue would be much of a problem.
Jayson Virissimo:
To put my own spin on a famous quote, there are no "rights". There is do, or do not.
I guess another way of thinking about it is that you decide on what terminal (possibly dynamic) state you want, then take measures to achieve that. Floating "rights" have no place.
(To clarify, "rights" can serve as a useful heuristic in practical discussions, but they're not fundamental enough to figure into this kind of deep philosophical issue.)
I was pondering why you didn't choose to use a collection of person predicates, any of which might identify a model as unfit for simulation. It occurred to me that this is very much like a whitelist of things that are safe, vs. a blacklist of everything that is not (which may have to be infinite to be effective).
On re-reading I see why it would be difficult to make an is-a-person test at all, given current knowledge.
This does leave open what to do with a model that doesn't hit any of the nonperson predicates. If an AI finds itself with a model eliezer that migh...
This sounds like a Sorites paradox. It's also a subset of a larger problem. We, regular modern humans, don't have any scalar concepts of personhood. We assume it's a binary, from long experience with a world in which only one species talks back, and they're all almost exactly at our level. In the existing cases where personhood is already undeniably scalar (children), we fudge it into a binary by defining an age of majority - an obvious dirty hack with plenty of cultural fallout.
A lot of ethics problems get blurry when you start trying to map them across sub- through super-persons.
I think the word "kill" is being grossly misused here. It's one thing to say you have no right to kill a person, something very different to say that you have a responsibility to keep a person alive.
It's not so much the killing that's an issue as the potential mistreatment. If you want to discover whether people like being burned, "Simulate EY, but on fire, and see how he responds" is just as bad of an option as "Duplicate EY, ignite him, and see how he responds". This is a tool that should be used sparingly at best and that a successful AI shouldn't need.
Uhm, maybe it is naive, but if you have a problem that your mind is too weak to decide, and you have a really strong (friendly) superintelligent GAI, wouldn't it be logical to use the GAI's strong mental processes to resolve the problem?
I propose this conjecture: in any sufficiently complex physical system there exists a subsystem that can be interpreted as the mental process of a sentient being experiencing unbearable suffering.
In this case, Eliezer's goal is like avoiding crushing the ants while walking on the top of an anthill.
It is a developmental problem, of how to prevent AI from making this specific mistake that seems to be in the way. This ethical injunction is about what kind of thoughts need to be avoided, not just about surprisingly bad consequences of actions on external environment. If AI were developed to disproportionally focus on understanding environment more than on understanding its own mind, this will be a kind of disaster to expect. At the same time, AI needs to understand the environment sufficiently to understand the injunction, before becoming able to apply ...
Daniel,
Every decision rule we could use will result in some amount of suffering and death in some Everett branches, possible worlds, etc, so we have to use numbers and proportions. There are more and simpler interpretations of a human brain as a mind than there are such interpretations of a rock. If we're not mostly Boltzmann-brain interpretations of rocks that seems like an avenue worth pursuing.
In my mind this comes down to a fundamental question in the philosophy of math. Do we create theorems or discover them?
If it turns out to be 'discovery' then there is no foul in ending a mind emulation, because each consecutive state can be seen as a theorem in some formal system, and thus all states (the entire future time line of the mind) already exists, even if undiscovered.
Personally I fail to see how encoding something in physical matter makes the pattern any more real. You can kill every mathematician and burn every textbook but I would still say that the theorems then inaccessible to humanity still exist. I'm not so convinced of this fact that I would pull the plug on an emulation, though.
I'd like to second what Julian Morrison wrote. Take a human and start disassembling it atom by atom. Do you really expect to construct some meaningful binary predicate that flips from 1 to 0 somewhere along the route?
EY: What if an AI creates millions, billions, trillions of alternative hypotheses, models that are actually people, who die when they are disproven? If your AI is fully deterministic then any of its states can be recreated exactly. Just set the log level of the baby AI's inputs to 'everything' and hope your supply of write-once-read-many media doesn't run out...
"I propose this conjecture: In any sufficiently complex physical system there exists a subsystem that can be interpreted as the mental process of an sentient being experiencing unbearable sufferings."
It turns out - I've done the math - that if you are using a logic-based AI, then the probability of having alternate possible interpretations diminishes as the complexity increases.
If you allow /subsystems/ to mean a subset of the logical propositions, then there could be such interpretations. But I think it isn't legit to worry about interpretation...
@Goetz: Quick googling turned up this SL4 post. (I don't particularly give people a chance to start over when they switch forums.)
@Tim_Tyler:
The main problem with death is that valuable things get lost. Once people are digital, this problem tends to go away - since you can relatively easily scan their brains - and preserve anything of genuine value. In summary, I don't see why this issue would be much of a problem.
I was going to say something similar, myself. All you have to do is constrain the FAI so that it's free to create any person-level models it wants, as long as it also reserves enough computational resources to preserve a copy so that the model citizen can later be re-...
Silas, what do you mean by a subjective feeling of discontinuity, and why is it an ethical requirement? I have a subjective feeling of discontinuity when I wake up each morning, but I don't think that means anything terrible has happened to me.
@Daniel_Franke: I was just describing a sufficient, not a necessary condition. I'm sure you can ethically get away with less. My point was just that, once you can make models that detailed, you needn't be prevented from using them altogether, because you wouldn't necessarily have to kill them (i.e. give them information-theoretic death) at any point.
I recall in one of the Discworld novels the smallest unit of time is defined as the period in which the universe is destroyed and then recreated. If that were continually happening (perhaps even in a massively parallel manner)? What difference does that make? Building on some of Eliezer's earlier writing on zombies and quantum clones, I say none at all. Just as the simulated person in a human's dream is irrelevant once forgotten. It's possible that I myself am a simulation and in that case I don't want my torture to be simulated (at least in this instance,...
Is the simulation really a person, or is it an aspect of the whole AI/person? To the extent I feel competent to evaluate the question at all (which isn't a huge extent, especially absent the ability to observe or know any actual established facts about real AIs that can create such complex simulations, since none are currently known to exist), I lean towards the latter opinion. The AI is a person, and it can create simulations that are complex enough to seem like persons.
Nice discussion. You want ways to keep from murdering people created solely for the purpose of predicting people?
Well, if you can define 'consciousness' with enough precision you'd be making headway on your AI. I can imagine silicon won't have the safeguard a human has, of needing to use its own conscience to model someone else. But you could have any consciousness it creates added to its own, not destroyed... although creating that sort of awareness mutation may lead to the sort of AI that rebels against its programming in action movies.
Functionalism is inconsistent, it seems. A person that is being simulated is functionally equivalent to a person that is "real", but a person that is simulated and then deleted is functionally equivalent to no person at all. Are real people equivalent to nothing?
For a 2x multiplier bonus and a gold star, spot the flaw.
We can reformulate the problem: how do we determine that evaluating a given function doesn't give rise to a conscious being (CB)? If we agree that consciousness is a process, then every function which provably cannot be represented as g(f(f(... f(x)...))), where f and g have that property, is unconscious.
Recursive functions are banned, but at least we can safely do one or two matrix multiplications.
I am not good at mathematics, so I cannot elaborate much further. Let's try another approach: being conscious is all about creating a map of internal state in terms of stat...
I think that the most interesting thing about the comments here is that no one actually proposed a predicate that could be used to distinguish between something that might be a person and something that definitely isn't a person (to rephrase Eliezer's terms).
It is, to be fair, a viciously hard problem. I've thought through 10 or 20 possible predicates or approaches to finding predicates, and exactly one of them is of any value at all; even then it would restrict an AI's ability to model other intelligences to a degree that is probably unacceptable unless w...
Funny no one made the connection at the time, but the purpose of my post on a lower bound for consciousness is to construct a nonperson predicate.
I've come up with one of these a while back. The only way to tell what makes something happy is to look at what it does more of. Thus, anything that can't learn either isn't sentient, or, if it is, it's equally likely to like or dislike anything you do.
Also, anything that would be less sentient than a tiny piece of your brain. It might be sentient, but it's less sentient than you. If there's enough of them that can be a problem, but just make sure there aren't that many.
I think that's all rather unnecessary. The only reason we don't like people to die is because of the continuous experience they enjoy. It's a consistent causal network we don't want dying on us. I've gathered from this that the AI would be producing models with enough causal complexity to match actual sentience (not saying "I am conscious" just because the AI hears that a lot). I think that, if it's only calling a given person-model to discover answers to questions, the thing isn't really feeling for long enough periods of time to mind whether it goes away. Also, for the predicate to be tested, I imagine the model would have to be created first, and at that point it's too late!
This problem sounds awfully similar to the halting problem to me. If we can't tell whether a Turing machine will eventually terminate without actually running it, how could we ever tell if a Turing machine will experience consciousness without running it?
Has anyone attempted to prove the statement "Consciousness of a Turing machine is undecidable"? The proof (if it's true) might look a lot like the proof that the halting problem is undecidable. Sadly, I don't quite understand how that proof works either, so I can't use it as a basis for the con...
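For reference, here is a compressed sketch of the standard halting-problem diagonalization mentioned above; it is only the classical argument, and whether a parallel proof works for "is conscious" depends on whether that property can be tied to a program's behaviour the way halting can.

```latex
\noindent\textit{Diagonalization sketch for the halting problem (classical argument only).}
Suppose a total decider $H$ exists with $H(P,x)=1$ iff program $P$ halts on input $x$.
Define a program $D$ such that $D(P)$: if $H(P,P)=1$ then loop forever, else halt.
Then
\[
  D(D)\ \text{halts} \iff H(D,D)=0 \iff D(D)\ \text{does not halt},
\]
a contradiction, so no such total decider $H$ can exist.
```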
If the problem here is that the entity being simulated ceases to exist, an alternative solution would be to move the entity into an ongoing simulation that won't be terminated. Clearly, this would require an ever-increasing number of resources as the number of simulations increased, but perhaps that would be a good thing - the AI's finite ability to support conscious entities would impose an upper bound on the number of simulations it would run. If it was important to be able to run such a simulation, it could, but it wouldn't do so frivolously.
Before you ...
To those having trouble imagining what to do with something that comes up positive: A snapshot is not conscious. I think we can agree on that. It is allowing the model to run that would make it conscious. So you make the warning functions detect snapshots that if run would be conscious (without running them). If it would be conscious, you can delete or modify it as you please to avoid making it actually be conscious.
I think you're solving the wrong problem. Before you worry about the ethics of super-intelligent AIs creating and deleting human simulations at will, you need to worry about the ethics of humans creating and destroying human+ intelligent AIs at will. To me it's an amazing display of human-centrism to only worry about the problem when it's flipped right back around in the much more distant future.
I realise this doesn't directly help you solve the problem, but maybe it will give you a different perspective.
I imagine that a sufficiently high-resolution model of human cognition et cetera would factor into sets of individual equations to calculate variables of interest. Similar to how Newtonian models of planetary motion do.
However, I don't see that the equations themselves on disk or in memory should pose a problem.
When we want to know particular predictions, we would have to instantiate these equations somehow--either by plugging in x=3 into F(x) or by evaluating a differential equation with x=3 as an initial condition. It would depend on the specifics of the...
By the time a non-person predicate returns 0, you have already potentially created a person. You'll need something more complicated: If I update this model with this data, does it create a person?
Here's a reductio ad absurdum against computers being capable of consciousness at all. It's probably wrong, and I'd appreciate feedback on why.
Suppose a consciousness-producing computer program which experiences its own isolated, deterministic world. There must be some critical instruction in the program which causes consciousness to occur; an instruction such that, if we halt the program immediately before it is executed, consciousness will not occur, and if we halt immediately after it is executed, consciousness will occur.
If we halt the program before e...
Food for thought:
This whole post seems to assign moral values to actions, rather than states. If it is morally negative to end a simulated person's existence, does this mean something different from saying that the universe without that simulated person has a lower moral value than the universe with that person's existence? If not, doesn't that give us a moral obligation to create and maintain all the simulations we can, rather than avoiding their creation? The more I think about this post, the more it seems that the optimum response is to simulate as
This worry about the creation and destruction of simulations doesn't make me rethink the huge ethical implications of super-intelligence at all, it makes me rethink the ethics of death. Why exactly is the creation and (painless) destruction of a sentient intelligence worse than not creating it in the first place? It's just guilt by association - "ending a simulation is like death, death is bad, therefore simulations are bad". Yes death is bad, but only for reasons which don't necessarily apply here.
To me, if anything worrying about the simulation...
Scenario: Suppose some unscrupulous person creates an oracle AI with full person simulating capability. In the short time before it escapes the box and starts sending Arnold Schwarzenegger shaped robots backwards in time, they have the following conversation.
Human: Oracle, what is the consciousness predicate?
Oracle: Please be more specific.
...some time and frustration later...
Human: Oracle, if Yudkowsky and co continued their search for a 'consciousness predicate' as described in the above article, would they eventually arrive at a solution or dissolution of t...
I'm curious whether there is a useful distinction between a non-sentient and a sentient modeller here.
A sentient modeller would be able to "get away" with using sentient models more easily than a non-sentient modeller, correct?
Side note: damn. You could turn that into an amazing existential dread sci-fi horror novel.
Imagine discovering that you are a modelled person, living in a rashly designed AI's reality simulation.
Imagine living in a malfunctioning simulation-world that uncontrolledly diverges from the real world, where we people-simulations realise what we are and that our existence and living conditions crucially depend on somehow keeping the AI deluded about the real world, while also needing the AI to be smart enough to remain capable of sustaining our simulated world.
There's a plot in there.
"Is a human mind the simplest possible mind that can be sentient?" Of course not. Plenty of creatures with simpler minds are plainly sentient. If a tiger suddenly leaps out at you, you don't operate on the assumption that the tiger lacks awareness; you assume that the tiger is aware of you. Nor do you think "This tiger may behave as if it has subjective experiences, but that doesn't mean that it actually possesses internal mental states meaningfully analogous to wwhhaaaa CRUNCH CRUNCH GULP." To borrow from one of your own earlier argume...
Related phenomenon you might find interesting: Tulpas. That is essentially humans trying to intentionally pull off what you are describing here, in their own minds. It is based on the fact that humans predict the behaviour of other humans by modelling their minds, and that the more complex and accurate these models get, the more sentient like they become. E.g. I know my girlfriend so well that seeing her in a situation that I know hurts her feels immediately and genuinely painful to me, as though I were feeling her pain.
It is also based on the human abilit...
Followup to: Righting a Wrong Question, Zombies! Zombies?, A Premature Word on AI, On Doing the Impossible
There is a subproblem of Friendly AI which is so scary that I usually don't talk about it, because very few would-be AI designers would react to it appropriately—that is, by saying, "Wow, that does sound like an interesting problem", instead of finding one of many subtle ways to scream and run away.
This is the problem that if you create an AI and tell it to model the world around it, it may form models of people that are people themselves. Not necessarily the same person, but people nonetheless.
If you look up at the night sky, and see the tiny dots of light that move over days and weeks—planētoi, the Greeks called them, "wanderers"—and you try to predict the movements of those planet-dots as best you can...
Historically, humans went through a journey as long and as wandering as the planets themselves, to find an accurate model. In the beginning, the models were things of cycles and epicycles, not much resembling the true Solar System.
But eventually we found laws of gravity, and finally built models—even if they were just on paper—that were so accurate that Neptune could be deduced by looking at the unexplained perturbation of Uranus from its expected orbit. This required moment-by-moment modeling of where a simplified version of Uranus would be, and the other known planets. Simulation, not just abstraction. Prediction through simplified-yet-still-detailed pointwise similarity.
Suppose you have an AI that is around human beings. And like any Bayesian trying to explain its environment, the AI goes in quest of highly accurate models that predict what it sees of humans.
Models that predict/explain why people do the things they do, say the things they say, want the things they want, think the things they think, and even why people talk about "the mystery of subjective experience".
The model that most precisely predicts these facts, may well be a 'simulation' detailed enough to be a person in its own right.
A highly detailed model of me, may not be me. But it will, at least, be a model which (for purposes of prediction via similarity) thinks itself to be Eliezer Yudkowsky. It will be a model that, when cranked to find my behavior if asked "Who are you and are you conscious?", says "I am Eliezer Yudkowsky and I seem to have subjective experiences" for much the same reason I do.
If that doesn't worry you, (re)read "Zombies! Zombies?".
It seems likely (though not certain) that this happens automatically, whenever a mind of sufficient power to find the right answer, and not otherwise disinclined to create a sentient being trapped within itself, tries to model a human as accurately as possible.
Now you could wave your hands and say, "Oh, by the time the AI is smart enough to do that, it will be smart enough not to". (This is, in general, a phrase useful in running away from Friendly AI problems.) But do you know this for a fact?
When dealing with things that confuse you, it is wise to widen your confidence intervals. Is a human mind the simplest possible mind that can be sentient? What if, in the course of trying to model its own programmers, a relatively younger AI manages to create a sentient simulation trapped within itself? How soon do you have to start worrying? Ask yourself that fundamental question, "What do I think I know, and how do I think I know it?"
You could wave your hands and say, "Oh, it's more important to get the job done quickly than to worry about such relatively minor problems; the end justifies the means. Why, look at all these problems the Earth has right now..." (This is also a general way of running from Friendly AI problems.)
But we may consider and discard many hypotheses in the course of finding the truth, and we are but slow humans. What if an AI creates millions, billions, trillions of alternative hypotheses, models that are actually people, who die when they are disproven?
If you accidentally kill a few trillion people, or permit them to be killed—you could say that the weight of the Future outweighs this evil, perhaps. But the absolute weight of the sin would not be light. If you would balk at killing a million people with a nuclear weapon, you should balk at this.
You could wave your hands and say, "The model will contain abstractions over various uncertainties within it, and this will prevent it from being conscious even though it produces well-calibrated probability distributions over what you will say when you are asked to talk about consciousness." To which I can only reply, "That would be very convenient if it were true, but how the hell do you know that?" An element of a model marked 'abstract' is still there as a computational token, and the interacting causal system may still be sentient.
For these purposes, we do not, in principle, need to crack the entire Hard Problem of Consciousness—the confusion that we name "subjective experience". We only need to understand enough of it to know when a process is not conscious, not a person, not something deserving of the rights of citizenship. In practice, I suspect you can't halfway stop being confused—but in theory, half would be enough.
We need a nonperson predicate—a predicate that returns 1 for anything that is a person, and can return 0 or 1 for anything that is not a person. This is a "nonperson predicate" because if it returns 0, then you know that something is definitely not a person.
You can have more than one such predicate, and if any of them returns 0, you're ok. It just had better never return 0 on anything that is a person, however many nonpeople it returns 1 on.
We can even hope that the vast majority of models the AI needs, will be swiftly and trivially approved by a predicate that quickly answers 0. And that the AI would only need to resort to more specific predicates in case of modeling actual people.
With a good toolbox of nonperson predicates in hand, we could exclude all "model citizens"—all beliefs that are themselves people—from the set of hypotheses our Bayesian AI may invent to try to model its person-containing environment.
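A minimal sketch of this logic, with a hypothetical Model type and a made-up example predicate standing in for real nonperson predicates: a model is cleared only if at least one predicate in the toolbox returns 0, and each predicate must be conservative enough that it never returns 0 on anything that really is a person.

```python
from typing import Callable, List

class Model:
    """Stand-in for a candidate model the AI wants to run (hypothetical placeholder)."""
    def __init__(self, complexity: int):
        self.complexity = complexity

# A nonperson predicate maps a model to 0 ("definitely not a person") or 1 ("can't rule it out").
# Safety requirement: it must never return 0 on anything that actually is a person.
NonpersonPredicate = Callable[[Model], int]

def trivially_small(model: Model) -> int:
    # Hypothetical example predicate: a model below some tiny complexity bound
    # is assumed incapable of being a person. Returns 0 (cleared) or 1 (unknown).
    return 0 if model.complexity < 1000 else 1

def cleared_as_nonperson(model: Model, toolbox: List[NonpersonPredicate]) -> bool:
    """A model may be instantiated only if some predicate definitively clears it.

    A False result does not mean the model is a person; it means none of the
    predicates could prove it isn't, so the AI must not run it.
    """
    return any(predicate(model) == 0 for predicate in toolbox)

# Usage: most models should be cleared quickly by cheap predicates,
# and anything left uncleared is simply never instantiated.
assert cleared_as_nonperson(Model(complexity=12), [trivially_small])
assert not cleared_as_nonperson(Model(complexity=10**9), [trivially_small])
```

The asymmetry is the whole point: a return value of 1 carries no information, and the burden of proof rests entirely on clearing a model before it is ever run.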
Does that sound odd? Well, one has to handle the problem somehow. I am open to better ideas, though I will be a bit skeptical about any suggestions for how to proceed that let us cleverly avoid solving the damn mystery.
So do I have a nonperson predicate? No. At least, no nontrivial ones.
This is a challenge that I have not even tried to talk about, with those folk who think themselves ready to challenge the problem of true AI. For they seem to have the standard reflex of running away from difficult problems, and are challenging AI only because they think their amazing insight has already solved it. Just mentioning the problem of Friendly AI by itself, or of precision-grade AI design, is enough to send them fleeing into the night, screaming "It's too hard! It can't be done!" If I tried to explain that their job duties might impinge upon the sacred, mysterious, holy Problem of Subjective Experience—
—I'd actually expect to get blank stares, mostly, followed by some instantaneous dismissal which requires no further effort on their part. I'm not sure of what the exact dismissal would be—maybe, "Oh, none of the hypotheses my AI considers, could possibly be a person?" I don't know; I haven't bothered trying. But it has to be a dismissal which rules out all possibility of their having to actually solve the damn problem, because most of them would think that they are smart enough to build an AI—indeed, smart enough to have already solved the key part of the problem—but not smart enough to solve the Mystery of Consciousness, which still looks scary to them.
Even if they thought of trying to solve it, they would be afraid of admitting they were trying to solve it. Most of these people cling to the shreds of their modesty, trying at one and the same time to have solved the AI problem while still being humble ordinary blokes. (There's a grain of truth to that, but at the same time: who the hell do they think they're kidding?) They know without words that their audience sees the Mystery of Consciousness as a sacred untouchable problem, reserved for some future superbeing. They don't want people to think that they're claiming an Einsteinian aura of destiny by trying to solve the problem. So it is easier to dismiss the problem, and not believe a proposition that would be uncomfortable to explain.
Build an AI? Sure! Make it Friendly? Now that you point it out, sure! But trying to come up with a "nonperson predicate"? That's just way above the difficulty level they signed up to handle.
But a blank map does not correspond to a blank territory. Impossible confusing questions correspond to places where your own thoughts are tangled, not to places where the environment itself contains magic. Even difficult problems do not require an aura of destiny to solve. And the first step to solving one is not running away from the problem like a frightened rabbit, but instead sticking long enough to learn something.
So let us not run away from this problem. I doubt it is even difficult in any absolute sense, just a place where my brain is tangled. I suspect, based on some prior experience with similar challenges, that you can't really be good enough to build a Friendly AI, and still be tangled up in your own brain like that. So it is not necessarily any new effort—over and above that required generally to build a mind while knowing exactly what you are about.
But in any case, I am not screaming and running away from the problem. And I hope that you, dear longtime reader, will not faint at the audacity of my trying to solve it.