This is all predicated on the assumption that "sentience" automatically results in moral rights. I would say that moral rights are fundamentally based on empathy, which is subjective -- we give other people moral rights in order to secure those rights for ourselves.
I think the vast majority of the population would have no problem with "apartheid" or "genocide" of sentient AIs or chimps. As a secular humanist, I would reluctantly agree with them. Like it or not, at some level my morality boils down to an emotional attachment...
Or should we be content to have the galaxy be 0.1% eudaimonia and 99.9% cheesecake?
Given that the vast majority of possible futures are significantly worse than this, I would be pretty happy with this outcome. But what happens when we've filled the universe? Much like the board game Risk, your attitude towards your so-called allies will abruptly change once the two of you are the only ones left.
Tim:
Eliezer was using "sentient" practically as a synonym for "morally significant". Everything he said about the hazards of creating sentient beings was about that. It's true that in our current state, our feelings of morality come from empathic instincts, which may not stretch (without introspection) so far as to feel concern for a program which implements the algorithms of consciousness and cognition, even perhaps if it's a human brain simulation. However, upon further consideration and reflection, we (or at least most of us, I think...
Shouldn't this outcome be something the CEV would avoid anyway? If it's making an AI that wants what we would want, then it should not at the same time be making something we would not want to exist.
Also, I think it is at least as possible that on moral reflection we would consider all mammals/animals/life as equal citizens. So we may already be outvoted.
I think we're all out of our depth here. For example, do we have an agreed upon, precise definition of the word "sentient"? I don't think so.
I think that for now it is probably better to try to develop a rigorous understanding of concepts like consciousness, sentience, personhood and the reflective equilibrium of humanity than to speculate on how we should add further constraints to our task.
Nonsentience might be one of those intuitive concepts that falls to pieces upon closer examination. Finding "nonperson predicates" might be like looking for "nonfairy predicates".
I think it's worth noting that truly unlimited power means being able to undo anything. But is it wrong to rewind when things go south? If you rewind far enough you'll be erasing lives and conjuring up new, different ones. Is rewinding back to before an AI explodes into a zillion copies morally equivalent to destroying them in this direction of time? Unlimited power is unlimited ability to direct the future. Are the lives on every path you don't choose "on your shoulders," so to speak?
So if we created a brain emulation that wakes up one morning (in a simulated environment), lives happily for a day, and then goes to bed after which the emulation is shut down, would that be a morally bad thing to do? Is it wrong? After all, living one day of happiness surely beats non-existence?
luzr: The cheesecake is a placeholder for anything that the sentient AI might value highly, while we (upon sufficient introspection) do not. Eliezer thinks that some/most of our values are consequences of our long history, and are unlikely to be shared by other sentient beings.
The problems of morality seem to be quite tough, particularly when tradeoffs are involved. But I think in your scenario, Lightwave, I agree with you.
nazgulnarsil: I disagree about the "unlimited power", at least as far as practical consequences are concerned. We're not real...
Actually it sounds pretty unlikely to me, considering the laws of thermodynamics as far as I know them.
You can make entropy run in reverse in one area as long as a compensating amount of entropy is generated somewhere within the system. What do you think a refrigerator is? What if the extra entropy that needs to be generated in order to rewind is shunted off to some distant corner of the universe that doesn't affect the area you are worried about? I'm not talking about literally making time go in reverse. You can achieve what is functionally the same thing by reversing all the atomic reactions within a volume and shunting the entropy generated by the energy you used to do this to some other area.
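A minimal way to write down the constraint this comment is invoking, treating the rewound region plus everything else as one closed system (the symbols are just for illustration):

$$\Delta S_{\text{total}} = \Delta S_{\text{region}} + \Delta S_{\text{elsewhere}} \;\ge\; 0$$

A local decrease $\Delta S_{\text{region}} < 0$ is thermodynamically permitted so long as $\Delta S_{\text{elsewhere}} \ge -\Delta S_{\text{region}}$, which is the same trade a refrigerator makes: the compensating entropy just has to land somewhere you don't care about.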
anon: "The cheesecake is a placeholder for anything that the sentient AI might value highly, while we (upon sufficient introspection) do not."
I am quite aware of that. Anyway, using "cheesecake" as a placeholder adds a bias to the whole story.
"Eliezer thinks that some/most of our values are consequences of our long history, and are unlikely to be shared by other sentient beings."
Indeed. So what? In reality, I am quite interested in what a superintelligence would really consider valuable. But I am pretty sure that "big cheesecake...
I agree that it's not all-out impossible under the laws of thermodynamics, but I personally consider it rather unlikely to work on the scales we're talking about. This all seems somewhat tangential though; what effect would it have on the point of the post if "rewinding events" in a macroscopic volume of space was theoretically possible, and easily within the reach of a good recursively self-improving AGI?
luzr: Using anything but "cheesecake" as a placeholder adds a bias to the whole story, in that case.
luzr: The strength of an optimizing process (i.e. an intelligence) does not necessarily dictate, or even affect too deeply, its goals. This has been one of Eliezer's themes. And so a superintelligence might indeed consider incredibly valuable something that you wouldn't be interested in at all, such as cheesecake, or smiling faces, or paperclips, or busy beaver numbers. And this is another theme: rationalism does not demand that we reject values merely because they are consequences of our long history. Instead, we can reject values, or broaden them, or oth...
"what effect would it have on the point"
If rewinding is morally unacceptable (erasing could-have-been sentients) and you have unlimited power to direct the future, does this mean that all the could-have-beens from futures you didn't select are on your shoulders? This is directly related to another recent post. If I choose a future with fewer sentients who have a higher standard of living, am I responsible for the sentients that would have existed in a future where I chose to let a higher number of them be created? If you're a utilitarian this is the delicate...
Most of our choices have this sort of impact, just on a smaller scale. If you contribute a real child to the continuing genetic evolution process, if you contribute media articles that influence future perceptions, if you contribute techs that change future society, you are in effect adding to and changing the sorts of people there are and what they value, and doing so in ways you largely don't understand.
A lot of futurists seem to come to a similar point, where they see themselves on a runaway freight train, where no one is in control, knows where we ar...
The difference between reality and this hypothetical scenario is where control resides. I take no issue with the decentralized future roulette we are playing when we have this or that kid with this or that person. All my study of economics and natural selection indicates that such decentralized methods are self-correcting. In this scenario we approach the point where the future cone could have this or that bit snuffed by the decision of a singleton (or a functional equivalent); advocating that this sort of thing be slowed down so that we can weigh the decisions carefully seems prudent. Isn't this sort of the main thrust of the Friendly AI debate?
"please please slow all this change down"
No way no how. Bring the change on, baby. Bring.It.On.
For those who complain about being on your toes all the time, I say take ballet.
I'd agree with the sentiment in this post. I'm interested in building artificial brain stuff, more than building Artificial People. That is, a computational substrate that allows the range of purpose-oriented adaptation shown in the brain, but with different modalities. Not neurally based, because simulating neural systems on a system where processing and memory are split defeats the majority of the point of them for me.
Democracy is a dumb idea. I vote for aristocracy/apartheid. Considering the disaster of the former Rhodesia, currently Zimbabwe, and the growing similarities in South Africa, the actual historical apartheid is starting to look pretty good. So I agree with Tim M, except I'm not a secular humanist.
I'm not sure I understand how sentience has anything to do with anything (even if we knew what it was). I'm sentient, but cows would continue to taste yummy if I thought they were sentient (I'm not saying I'd still eat them, of course).
Anyways, why not build an AI whose goal was to non-coercively increase the intelligence of mankind? You don't have to worry about its utility function being compatible with ours in that case. Sure, I don't know how we'd go about making human intelligence more easily modified (as I have no idea what sentience is), but a super-intelligence might be able to figure it out.
Anon: "The notion of "morally significant" seems to coincide with sentience."
Yes; the word "sentience" seems to be just a placeholder meaning "qualifications we'll figure out later for being thought of as a person."
Tim: Good point, that people have a very strong bias to associate rights with intelligence; whereas empathy is a better criterion. Problem being that dogs have lots of empathy. Let's say intelligence and empathy are both necessary but not sufficient.
James: "Shouldn't this outcome be something the C...
Anyways, why not build an AI whose goal was to non-coercively increase the intelligence of mankind? You don't have to worry about its utility function being compatible with ours in that case. Sure, I don't know how we'd go about making human intelligence more easily modified (as I have no idea what sentience is), but a super-intelligence might be able to figure it out.
And it doesn't consider it significant that this one hack that boosts IQ by 100 points makes us miserable/vegetables/sadists/schizophrenic/take your pick. Or think that it should have asked ...
Nick, that's why I said non-coercively (though looking back on it, that may be a hard thing to define for a super-intelligence that could easily trick humans into becoming schizophrenic geniuses). But isn't that a problem with any self-modifying AI? The directive "make yourself more intelligent" relies on definitions of intelligence, sanity, etc. I don't see why it would be any more likely to screw up human intelligence than its own.
If the survival of the human race is one's goal, I wouldn't think keeping us at our current level of intelligence is even an option.
Offering someone a pill that'll make them a schizophrenic genius, without telling them about the schizophrenia part, doesn't even fall under most (any?) ordinary definitions of "coercion". (Which vary enough to have whole opposing political systems be built on them – if I'm dependent on employment to eat, am I working under coercion?)
An AI improving itself has a clear definition of what not to mess with – its current goal system.
Nick,
Understood; though I'd call fraud coercion, the use of the word is a side issue here. However, an AI improving humans could have an equally clear view of what not to mess with: their current goal system. Indeed, I think if we saw specialized AIs that improved other AIs, we'd see something like this anyway. The improved AI would not agree to be altered unless doing so furthered its goals; i.e., the improvement would be unlikely to alter its goal system.
Not telling people about harmful side-effects that they don't ask about wasn't considered fraud when all the food companies failed to inform the public about Trans Fats, as far as I can tell. At the least, their management don't seem to be going to jail over it. Not even the cigarette executives are generally concerned about prison time.
I agree with Phil; all else equal I'd rather have whatever takes over be sentient. The moment to pause is when you make something that takes over, not so much when you wonder if it should be sentient as well.
Implementing an algorithm is simpler than optimizing for morality: you have all kinds of equivalence at your disposal, and you can undo anything. If the first AI doesn't itself contribute any moral content, you (or it) are free to renormalize it in any way, recreating it the way it was supposed to be built, as opposed to the way it was actually built, experimenting with its implementation, emulating its runs, and so on and so forth. If, on the other hand, its structure is morally significant, rebuilding might no longer be an option, and a final result may be wo...
Sentience is one of the basic goods. If the sysop is non-sentient, then whatever computronium is used in the sysop is, WRT sentience, wasted.
If we suppose that intelligences have a power-law distribution, and the sysop is the one at the top, we'll find that it uses up something around 20% to 50% of the accessible universe's computronium.
That would be a natural (as in "expected in nature") distribution. But since the sysop needs to stay in charge, it will probably destroy any other AIs who reach the "second tier" of intelligence. So i...
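As a rough sanity check on that "20% to 50%" figure: the top holder's share under a power law depends heavily on the tail exponent and the population size, neither of which is pinned down above. The sketch below (the Pareto form, the exponent values, and the population size are all assumptions for illustration) simply samples a population and reports what fraction the single largest member holds:

```python
import numpy as np

# Hypothetical sketch: draw N "intelligences" from a Pareto (power-law)
# distribution and measure what fraction of the total the largest one holds.
# The exponents and N below are illustrative assumptions, not figures from
# the discussion; the answer depends strongly on both.
rng = np.random.default_rng(0)
N = 1_000_000
for alpha in (0.5, 0.8, 1.5, 2.5):
    sizes = rng.pareto(alpha, N) + 1.0  # classical Pareto with minimum size 1
    top_share = sizes.max() / sizes.sum()
    print(f"alpha = {alpha}: top holder has {top_share:.1%} of the total")
```

For tail exponents below 1 the expected top share is roughly 1 − alpha (about 20% near alpha ≈ 0.8 and about 50% near alpha ≈ 0.5), which is one way a 20% to 50% figure could arise; for steeper tails the top holder's share becomes negligible.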
I am uncomfortable with the notion that there is an absolute measure of whether (or to what degree) a particular entity is morally significant. It seems to touch on Eliezer's discarded idea of Absolute Morality. Is it an intrinsic property of reality whether a given entity has moral significance? If so, what other moral questions can be resolved Absolutely?
Isn't it possible, or even likely, that there is no Absolute measure of moral significance? If we accept that other moral questions do not have Absolute answers, why should this question be different?
Hal, while many of our moral categories do seem to be torturable by borderline cases, if we get to pick the system design, we can try to avoid a borderline case.
"Avoid creating any new intelligent species at all, until we or some other decision process advances to the point of understanding what the hell we're doing and the implications of our actions."
That sounds like self-referential logic to me. What could possibly understand the implications of a new intelligence, except for a test run of the whole or part of that new intelligence?
I really like your site and your writings as it always seems to enrich my own thoughts on similar subjects. But I do find that I disagree with you on one point. I would jus...
You can't unbirth a child.
The revealed human preferences speak otherwise. Subsets of humans have decided that you can't do that, but I'm not at all certain that this is really something humans would converge to if they were wiser, smarter, and less crazy.
But I think I agree with the basic premise: we don't know, so let's not do something that might leave a bad taste in our mouths for eternity. To rephrase that:
I understood this blog post as: trillions of cheesecake lovers we care about change the utility payoff we can get in our universe. Us denying them...
So, the thing I primarily got from this article was a gigantic wiggling confusion...
What is "sentience"? I have been thinking this over for about three days and I still got neither a satisfying reduction to the subjective side of cognitive algorithms nor to anything resembling a mathematical principle.
If I took an EM and filed and refined the components, replaced the approximative neurons with hard applied maths, and compared the result to a run-of-the-mill Bayesian AI, would I have a module left over?
What exactly makes both me and EY and presumably m...
This is hard to reply to. I really wish not to insult you, I really do, but I have to say some harsh words. I do not mean this as any form of personal attack.
You are confused, you are deceiving yourself, you are pretending to be wise, and you are trying to make yourself unconfused by moving your confusion into such a complicated framework that you lose track of it.
Halt, melt and catch fire. It is time to say a loud and resounding "whoops."
You seemingly have something you think is a great idea. I can discern that it is about ontology, something about a dichotomy between "physical things" and "mental? things", and how "color" and related concepts exist in neither? I am a reasonably intelligent man, and I literally cannot make sense of what you are communicating. You yourself admit you cannot summarize your thoughts, which is almost always a bad sign.
My thesis is that the true ontology - the correct set of concepts by means of which to understand the nature of reality - is several layers deeper than anything you can find in natural science or computer science.
What evidence do you have?
...The attempt to describe reality entirely in terms of th
Not that I disagree with the conclusion, but these are good arguments against democracy, humanism and especially the idea of a natural law, not against creating a sentient AI.
Followup to: Nonsentient Optimizers
Why would you want to avoid creating a sentient AI? "Several reasons," I said. "Picking the simplest to explain first—I'm not ready to be a father."
So here is the strongest reason:
You can't unbirth a child.
I asked Robin Hanson what he would do with unlimited power. "Think very very carefully about what to do next," Robin said. "Most likely the first task is who to get advice from. And then I listen to that advice."
Good advice, I suppose, if a little meta. On a similarly meta level, then, I recall two excellent pieces of advice for wielding too much power:

1. Do less.
2. Avoid doing what cannot be undone.
Imagine that you knew the secrets of subjectivity and could create sentient AIs.
Suppose that you did create a sentient AI.
Suppose that this AI was lonely, and figured out how to hack the Internet as it then existed, and that the available hardware of the world was such that the AI created trillions of sentient kin—not copies, but differentiated into separate people.
Suppose that these AIs were not hostile to us, but content to earn their keep and pay for their living space.
Suppose that these AIs were emotional as well as sentient, capable of being happy or sad. And that these AIs were capable, indeed, of finding fulfillment in our world.
And suppose that, while these AIs did care for one another, and cared about themselves, and cared how they were treated in the eyes of society—
—these trillions of people also cared, very strongly, about making giant cheesecakes.
Now suppose that these AIs sued for legal rights before the Supreme Court and tried to register to vote.
Consider, I beg you, the full and awful depths of our moral dilemma.
Even if the few billions of Homo sapiens retained a position of superior military power and economic capital-holdings—even if we could manage to keep the new sentient AIs down—
—would we be right to do so? They'd be people, no less than us.
We, the original humans, would have become a numerically tiny minority. Would we be right to make of ourselves an aristocracy and impose apartheid on the Cheesers, even if we had the power?
Would we be right to go on trying to seize the destiny of the galaxy—to make of it a place of peace, freedom, art, aesthetics, individuality, empathy, and other components of humane value?
Or should we be content to have the galaxy be 0.1% eudaimonia and 99.9% cheesecake?
I can tell you my advice on how to resolve this horrible moral dilemma: Don't create trillions of new people that care about cheesecake.
Avoid creating any new intelligent species at all, until we or some other decision process advances to the point of understanding what the hell we're doing and the implications of our actions.
I've heard proposals to "uplift chimpanzees" by trying to mix in human genes to create "humanzees", and, leaving off all the other reasons why this proposal sends me screaming off into the night:
Imagine that the humanzees end up as people, but rather dull and stupid people. They have social emotions, the alpha's desire for status; but they don't have the sort of transpersonal moral concepts that humans evolved to deal with linguistic concepts. They have goals, but not ideals; they have allies, but not friends; they have chimpanzee drives coupled to a human's abstract intelligence.
When humanity gains a bit more knowledge, we understand that the humanzees want to continue as they are, and have a right to continue as they are, until the end of time. Because despite all the higher destinies we might have wished for them, the original human creators of the humanzees lacked the power and the wisdom to make humanzees who wanted to be anything better...
CREATING A NEW INTELLIGENT SPECIES IS A HUGE DAMN #(*%#!ING COMPLICATED RESPONSIBILITY.
I've lectured on the subtle art of not running away from scary, confusing, impossible-seeming problems like Friendly AI or the mystery of consciousness. You want to know how high a challenge has to be before I finally give up and flee screaming into the night? There it stands.
You can pawn off this problem on a superintelligence, but it has to be a nonsentient superintelligence. Otherwise: egg, meet chicken; chicken, meet egg.
If you create a sentient superintelligence—
It's not just the problem of creating one damaged soul. It's the problem of creating a really big citizen. What if the superintelligence is multithreaded a trillion times, and every thread weighs as much in the moral calculus (we would conclude upon reflection) as a human being? What if (we would conclude upon moral reflection) the superintelligence is a trillion times human size, and that's enough by itself to outweigh our species?
Creating a new intelligent species, and a new member of that species, especially a superintelligent member that might perhaps morally outweigh the whole of present-day humanity—
—delivers a gigantic kick to the world, which cannot be undone.
And if you choose the wrong shape for that mind, that is not so easily fixed—morally speaking—as a nonsentient program rewriting itself.
What you make nonsentient, can always be made sentient later; but you can't just unbirth a child.
Do less. Fear the non-undoable. It's sometimes poor advice in general, but very important advice when you're working with an undersized decision process having an oversized impact. What a (nonsentient) Friendly superintelligence might be able to decide safely, is another issue. But for myself and my own small wisdom, creating a sentient superintelligence to start with is far too large an impact on the world.
A nonsentient Friendly superintelligence is a more colorless act.
So that is the most important reason to avoid creating a sentient superintelligence to start with—though I have not exhausted the set.