Comment author: JoshuaZ 10 May 2012 04:39:28AM 1 point [-]

I'm not sure why running a complex society needs to be a condition. If we all reverted to hunter-gatherers, it would still satisfy the essential conditions.

That's a problem even if it isn't a doomsday scenario. Changes in animal welfare attitudes would probably make most of us unhappy, but a society where cute animals are tortured to death wouldn't be any worse at running a complex society. Similarly, allowing infanticide would work fine (heck, for that one I can think of some pretty decent arguments for why we should allow it). And while not doomsday scenarios, other scenarios that could suck a lot can also be constructed. For example, you could have a situation where we're all stuck with 1950s gender roles. That would be really bad but wouldn't destroy a complex society.

Comment author: Alerus 10 May 2012 01:47:27PM 0 points [-]

Hunter-gathering is not sustainable for a large-scale complex society. It is not a position we would favor at all, and I'm struggling to see why an AI would try to make us value that setup, or how you think a society with technology advanced enough to make strong AI could be persuaded into it.

Views on killing animals are more flexible, as the reason humans object to it seems to come from a level of innate compassion for life itself. So I could see that value being more manipulable as a result. I don't see what that has to do with a doomsday set of values, though.

1950s gender roles were abandoned because (1) women didn't like them (in which case maximizing people's well-being would suggest not having such gender roles) and (2) they were less productive for society, in that suppressing women limits the set of contributions to society.

I don't think you've presented here a set of doomsday values that humans could be manipulated into holding by persuasion alone, or demonstrated why they would be a set of values the AI would prefer humans to have for maximization.

Comment author: JoshuaZ 10 May 2012 03:27:30AM 0 points [-]

The point that other humans fought against it doesn't change the central point that a very large fraction of humans could have a radically different effective morality. Moreover, if Germany hadn't gone to war but had instead done the exact same thing to its internal minorities, most of the world likely would not have intervened.

If you don't like this example so much, one can just look at changing attitudes on many issues. See for example Pinker's book "The Better Angels of Our Nature", where he documents extreme changes in historical attitudes about the ethics of violence. For example, war is considered much more of a negative now than it was a few centuries ago. Going to war to gain territory is essentially unthinkable today. Similarly, attitudes about animals have changed a lot. In the Middle Ages, forms of entertainment that were considered normal included not just bear-baiting and similar activities but such crude behavior as lighting a cat on fire and seeing how long it took to die. Our moral attitudes are very much a product of our culture and how we are raised.

Comment author: Alerus 10 May 2012 04:04:37AM *  1 point [-]

Most of our changes to where we are now seem to be a result of what works better in a complex society, and I therefore have difficulty accepting that a society in the highly advanced state it would be in by the time we had strong AI could be pushed to a non-productive doomsday set of values. So let's make the argument clearer: what set of values do you think the AI could push us to through persuasion that would effectively be what we consider a doomsday scenario, while allowing the AI to more easily satisfy well-being?

Comment author: faul_sname 10 May 2012 02:27:19AM 2 points [-]

Humans can radically change the values of humans through weak social pressure alone.

Comment author: Alerus 10 May 2012 02:42:18AM 1 point [-]

I feel like I've already responded to this argument multiple times in various other responses I've made. If you think there's something I've overlooked in those responses let me know, but this seems like a restatement of things I've already addressed. Also, if there is something in one of the responses I've made with which you disagree and have a different reason than what's been presented, let me know.

Comment author: DanArmak 10 May 2012 12:08:00AM 0 points [-]

All human opinions cannot be created by persuasion alone because opinions have to start somewhere. People can and do think for themselves and that's what creates opinions.

This is completely wrong. Again, you give "persuasion" a very narrow scope.

A baby is born without language, certainly without many opinions. It can be shaped by its environment ("persuasion") to be almost anything. Certainly, very few of the extremely diverse cultures and sub-cultures known from history have had any trouble raising their children to behave like the adults around them, with typically only a small proportion of adolescents leaving for another society. And these people had no understanding of how the brain really works - unlike what a superintelligent AI might have.

Short version: it doesn't matter if people do think for themselves, because they only get to think about their sensory inputs and the AI can control those. Even a perfect Bayesian superintelligence would reach any conclusion you wished if you truly fully controlled all the information it ever received (as long as it had no priors of 0 or 1).
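The Bayesian claim above can be sketched numerically. This is a minimal illustration only; the 4:1 likelihood ratio, the 0.01 prior, and the 20 observations are arbitrary choices for the example, not anything from the thread. An agent that updates by Bayes' rule from any prior strictly between 0 and 1 can be driven arbitrarily close to certainty by a controller who chooses which evidence it sees:

```python
def update(prior, likelihood_ratio):
    """One Bayes update in odds form: posterior odds = prior odds * likelihood ratio."""
    odds = prior / (1.0 - prior)
    odds *= likelihood_ratio
    return odds / (1.0 + odds)

p = 0.01  # a deeply sceptical prior -- but not exactly 0
for _ in range(20):
    # every observation is chosen by the controller to favor hypothesis H at 4:1
    p = update(p, 4.0)

print(p)  # posterior is now above 0.999
```

Nothing here requires deceiving the agent about the update rule itself; perfectly rational updating on adversarially selected inputs is enough, which is the "it doesn't matter if people think for themselves" point.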

This entire site is based around getting people to not be arbitrarily malleable and to require rationality in making decisions [...] Is this site and community a failure then?

If you end up in an environment controlled by an unfriendly AI, having read this site won't help you; it's game over. LW rationality skills work in some worlds, not in any possible world.

Regarding actions that cause outrage I never said you were constrained by the outrage of others. I said an AI that maximizes human well-being is not going to take actions that cause extreme outrage.

How is this different from saying it's not going to let me take actions that cause extreme outrage? I hope you aren't planning on building an AI that has a sense of personal responsibility and doesn't care if humans subvert its utility function as long as it didn't cause them to do so.

Comment author: Alerus 10 May 2012 02:02:43AM -2 points [-]

There is a profound difference between being persuasive and manipulating all sensory input of a human. Is your argument not that it would try to persuade, but that an AI would hook all humans up to a computer that controlled everything we perceived? If you want to make that your argument, I'm game for discussing it, but it should be made clear that this is a very different argument from an AI trying to change people's minds through persuasion. But let's discuss it. Manipulating the senses of humans would seem to require a massive deployment and integration of technology by the AI that is not available today, but that's okay; we should expect technology to improve incredibly by the time we can make strong AI. So long as we assume that such hugely improved, widely integrated technology would let the AI pull the wool over everyone's eyes, though, we must also consider that humans would have made use of that same technology to better themselves and to build wildly intelligent computer security systems, such that it seems a stretch to me to posit that an AI could do this without anyone noticing.

How is this different from saying it's not going to let me take actions that cause extreme outrage? I hope you aren't planning on building an AI that has a sense of personal responsibility and doesn't care if humans subvert its utility function as long as it didn't cause them to do so.

I suppose if your actions were extreme enough in the outrage they caused, we might make a case for those actions needing to be thwarted, even by the reasoning of the AI. I don't know you, but my guess is you're thinking perhaps of religious fundamentalists' feelings about you. Such outrage on its own is (1) somewhat limited and counterbalanced by others and (2) counterproductive for humanity to act upon, in which case the better course is not to thwart your actions but to work toward tolerance. But let's contrast this with an AI trying to effectively replace mankind with easily satisfied humans, and consider how people would respond to that. I think it's clear that humans would work toward shutting such an AI down and would respond with extreme concern for their livelihood. The fact that we're sitting here talking about how this is a doomsday scenario seems to be evidence of that concern. Given that, it just doesn't seem to be in the AI's interest to make that choice; it would simply cause too great a collapse in the well-being of humanity, given their profound concern about the situation.

Comment author: DanielLC 10 May 2012 12:26:10AM 0 points [-]

Even if it wasn't, if the gain of adding a person was less than the drop in well-being of others, it wouldn't be beneficial unless the AI was able, without prevention, to create many more such people.

Do you honestly think a universe the size of ours can only support six billion people before reaching the point of diminishing returns?

We're operating under the assumption that the AI's methods of value manipulation are limited to what we can do ourselves, in which case rewiring is not something we can do with any great effect.

If you allow it to use the same tools but better, it will be enough. If you don't, it's likely to only try to do things humans would do, on the basis that they're not smart enough to do what they really want done.

Comment author: Alerus 10 May 2012 01:29:23AM *  0 points [-]

Do you honestly think a universe the size of ours can only support six billion people before reaching the point of diminishing returns?

That's not my point. The point is that people aren't going to be happy if an AI starts making people who are easier to maximize for, for the sole reason that they're easier to maximize for. The very fact that we are discussing this hypothetical as a problem is evidence that we would consider it one.

If you allow it to use the same tools but better, it will be enough. If you don't, it's likely to only try to do things humans would do, on the basis that they're not smart enough to do what they really want done.

You seem to be trying to break the hypothetical assumption on the grounds that I have not specified a complete set of criteria that would prevent an AI from rewiring the human brain. I'm not interested in trying to find a set of rules that would prevent an AI from rewiring human brains (I never tried to provide any; that's why it's called an assumption), because I'm not posing it as a solution to the problem. I've made this assumption to try to generate discussion of all the ways things could go wrong besides rewiring, since discussion typically seems to stop at "it will rewire us". Asserting "yeah, but it would rewire us, because you haven't strictly specified how it couldn't" really isn't relevant to what I'm asking, since I'm trying to get specifically at what it could do besides that.

Comment author: DanielLC 09 May 2012 11:09:42PM 0 points [-]

Assuming humans don't want the AI to make new people that are simply easier to maximize, if it created a new person, all people on the earth view this negatively and their well-being drops.

I'm not sure how common it is, but I at least consider total well-being to be important. The more people the better. The easier to make these people happy, the better.

Indeed it's difficult to say precisely, that's why I used what we can do now as analogy. I can't really rewire a person's values at all except through persuasion or other such methods.

An AI is much better at persuasion than you are. It would pretty much be able to convince you of whatever it wants.

Even our best neuroscientists can't do that unless I'm ignorant to some profound advances.

Our best neuroscientists are still mere mortals. Also, even among mere mortals, making small changes to someone's values is not difficult, and I don't think significant changes are impossible. For example, the consumer diamond industry would be virtually non-existent if De Beers hadn't convinced people to want diamonds.

Comment author: Alerus 09 May 2012 11:48:01PM 0 points [-]

I'm not sure how common it is, but I at least consider total well-being to be important. The more people the better. The easier to make these people happy, the better.

You must also consider that well-being need not be defined as a positive function. Even if it wasn't, if the gain of adding a person was less than the drop in well-being of others, it wouldn't be beneficial unless the AI was able, without prevention, to create many more such people.
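The arithmetic behind this can be made concrete. The numbers below are purely hypothetical, chosen only to illustrate the point: adding a person whose own well-being is positive can still lower the total if it imposes a drop on everyone else.

```python
# Hypothetical numbers, purely illustrative.
existing = [5.0, 5.0, 5.0]   # three existing people, total well-being 15
new_person = 2.0             # the new person's own well-being (positive)
drop_each = 1.0              # drop imposed on each existing person

total_before = sum(existing)                                      # 15.0
total_after = sum(w - drop_each for w in existing) + new_person   # 14.0

# The gain (2) is less than the combined drop (3), so under a total
# measure the addition isn't beneficial -- unless the AI can repeat it
# until the cheap-to-satisfy people dominate the sum.
```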

An AI is much better at persuasion than you are. It would pretty much be able to convince you whatever it wants.

I'm sure it'd be better than me (unless I'm also heavily augmented by technology, but we can avoid that issue for now). On what grounds can you say that it'd be able to persuade me of anything it wants? Intelligence doesn't mean you can do anything, and I think this claim needs to be justified.

Our best neuroscientists are still mere mortals. Also, even among mere mortals, making small changes to someone's values is not difficult, and I don't think significant changes are impossible. For example, the consumer diamond industry would be virtually non-existent if De Beers hadn't convinced people to want diamonds.

I know they're mere mortals. We're operating under the assumption that the AI's methods of value manipulation are limited to what we can do ourselves, in which case rewiring is not something we can do with any great effect. The point of the assumption is to ask what the AI could do without more direct manipulation. To that end, only persuasion has been offered, and as I've stated, I'm not seeing a compelling argument for why an AI could persuade anyone of anything.

Comment author: shminux 09 May 2012 10:58:59PM *  5 points [-]

You seem to think that you are living in a magical fair universe. Just because nothing really, really bad has happened to you/us yet doesn't mean it can't.

Comment author: Alerus 09 May 2012 11:37:58PM 1 point [-]

I don't think I live in a fair universe at all. Regardless, acknowledging that we don't live in a fair universe doesn't support your claim that an AI would be able to radically change the values of all humans on earth without outrage from others through persuasion alone.

Comment author: DanArmak 09 May 2012 10:25:17PM 1 point [-]

You underestimate "persuasion alone". Please consider that (by your definition) all human opinions on all subjects that have existed to date, have been created pretty much "by persuasion alone".

Also, I don't want to live in a world where what I'm allowed to do or be is constrained by whether it provokes "outrages from large sects of humanity". There are plenty of sects (properly so called ;-) today that don't want me to continue existing even the way I already am, at least not without major brainwashing.

Comment author: Alerus 09 May 2012 10:42:21PM -4 points [-]

All human opinions cannot be created by persuasion alone because opinions have to start somewhere. People can and do think for themselves and that's what creates opinions. Then they might persuade people to have these opinions as well, but clearly persuasion is not the sole source, and even then persuasion is not a one-way process where you hit the persuade button and the other person is switched. Your argument seems to be that any human can be persuaded of any opinion at any time, and I just can't buy that. Humans are malleable, and we've made a huge number of mistakes in the past, but I don't see us as so bad that anyone can have their mind changed to anything regardless of the merit behind it. This entire site is based around getting people to not be arbitrarily malleable and to require rationality in making decisions—that there are objective conclusions and we should strive for them. Is this site and community a failure then? Are all of these people subject to mere persuasion in spite of rationality, unable to think for themselves?

Regarding actions that cause outrage I never said you were constrained by the outrage of others. I said an AI that maximizes human well-being is not going to take actions that cause extreme outrage.

Comment author: DanArmak 09 May 2012 09:36:15PM 3 points [-]

Consider that every human who ever existed, was shaped purely by environment + genes.

Consider how much humans have achieved merely by controlling the environment: converting people to insane religions which they are willing to die and kill for, making torturers, "the banality of evil", etc. etc.

Now imagine what an entity could achieve with that plus 1) complete understanding of how the brain is shaped by the environment and/or 2) complete control of the environment (via VR, smart dust, whatever) for a human from age 0 onwards.

I think the conservative assumption is that any mind we would recognize as human, and many we wouldn't, could be produced by such an optimization process. You're not limiting your AI at all.

Comment author: Alerus 09 May 2012 09:53:41PM -4 points [-]

But the AI isn't being dropped into a completely undeveloped society. It will be dropped into an extremely developed society with values already existing. If the AI were dropped back into the era of early man, I could see major concern. I don't see humanity having the values we've developed being radically and entirely changed into something we consider so unsavory by persuasion alone. That doesn't mean no one could be affected, but I can't see such a thing going down without outrage from large sects of humanity; which is not what the AI wants.

Comment author: drethelin 09 May 2012 09:36:53PM 5 points [-]

If you don't think World War 2 was a large-scale effect, then I don't know what to say to you.

Comment author: Alerus 09 May 2012 09:47:24PM -6 points [-]

You make my point right there. World War 2. We went to war in defiance of the Nazis and refused to be assimilated. People in Germany didn't even like what the Nazis were doing. And finally, the Nazis didn't care about our outrage and the deaths in the resulting war. An AI trying to maximize well-being will, by definition, care profoundly about that.
