Comment author: DanielLC 10 May 2012 12:26:10AM 0 points [-]

Even if it wasn't, if the gain of adding a person was less than the drop in well-being of others, it wouldn't be beneficial unless the AI was able to, without prevention, create many more such people.

Do you honestly think a universe the size of ours can only support six billion people before reaching the point of diminishing returns?

We're operating under the assumption that the AI's methods of value manipulation are limited to what we can do ourselves, in which case rewiring is not something we can do with any great effect.

If you allow it to use the same tools but better, it will be enough. If you don't, it's likely to only try to do things humans would do, on the basis that they're not smart enough to do what they really want done.

Comment author: Alerus 10 May 2012 01:29:23AM *  0 points [-]

Do you honestly think a universe the size of ours can only support six billion people before reaching the point of diminishing returns?

That's not my point. The point is that people aren't going to be happy if an AI starts making people that are easier to maximize for the sole reason that they're easier to maximize. The very fact that we are discussing hypotheticals in which doing so is considered a problem suggests that it would register as a problem in our well-being.

If you allow it to use the same tools but better, it will be enough. If you don't, it's likely to only try to do things humans would do, on the basis that they're not smart enough to do what they really want done.

You seem to be trying to break the hypothetical assumption on the basis that I have not specified complete criteria that would prevent an AI from rewiring the human brain. I'm not interested in trying to find a set of rules that would prevent an AI from rewiring human brains (and I never tried to provide any; that's why it's called an assumption), because I'm not posing that as a solution to the problem. I've made this assumption to try to generate discussion of all the places where it will break down, since typically discussion seems to stop at "it will rewire us". Asserting "yeah, but it would rewire us because you haven't strongly specified how it couldn't" really isn't relevant to what I'm asking, since I'm trying to get specifically at what it could do besides that.

Comment author: DanielLC 09 May 2012 11:09:42PM 0 points [-]

Assuming humans don't want the AI to make new people that are simply easier to maximize, if it created a new person, all people on the earth view this negatively and their well-being drops.

I'm not sure how common it is, but I at least consider total well-being to be important. The more people the better. The easier to make these people happy, the better.

Indeed it's difficult to say precisely; that's why I used what we can do now as an analogy. I can't really rewire a person's values at all except through persuasion or other such methods.

An AI is much better at persuasion than you are. It would pretty much be able to convince you of whatever it wants.

Even our best neuroscientists can't do that unless I'm ignorant of some profound advances.

Our best neuroscientists are still mere mortals. Also, even among mere mortals, making small changes to someone's values is not difficult, and I don't think significant changes are impossible. For example, the consumer diamond industry would be virtually non-existent if De Beers hadn't convinced people to want diamonds.

Comment author: Alerus 09 May 2012 11:48:01PM 0 points [-]

I'm not sure how common it is, but I at least consider total well-being to be important. The more people the better. The easier to make these people happy, the better.

You must also consider that well-being need not be defined as a positive function. Even if it wasn't, if the gain of adding a person was less than the drop in well-being of others, it wouldn't be beneficial unless the AI was able to, without prevention, create many more such people.
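To make that tradeoff concrete, here is a toy sketch with entirely hypothetical numbers (nothing here comes from the thread): adding one easy-to-satisfy person nets out negative whenever the aggregate drop in everyone else's well-being outweighs the newcomer's gain.

```python
# Hypothetical numbers only: one new person contributes some utility,
# while every existing person loses a small amount of well-being from
# disapproving of the act. Total change is gain minus aggregate loss.

def net_change(new_person_utility, population, drop_per_person):
    """Change in total well-being from adding one new person."""
    return new_person_utility - population * drop_per_person

# One new person worth 10 units, against 6 billion people who each
# lose a trivial 1e-8 units: the act still nets out at -50.
single_addition = net_change(10.0, 6_000_000_000, 1e-8)
```

The worry raised in the thread is exactly the repeated case: if the AI can create such people many times over without prevention, the per-person gains eventually swamp the fixed population's losses.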

An AI is much better at persuasion than you are. It would pretty much be able to convince you of whatever it wants.

I'm sure it'd be better than me (unless I'm also heavily augmented by technology, but we can avoid that issue for now). On what grounds can you say that it'd be able to persuade me of anything it wants? Intelligence doesn't mean you can do anything, and I think this claim needs to be justified.

Our best neuroscientists are still mere mortals. Also, even among mere mortals, making small changes to someone's values is not difficult, and I don't think significant changes are impossible. For example, the consumer diamond industry would be virtually non-existent if De Beers hadn't convinced people to want diamonds.

I know they're mere mortals. We're operating under the assumption that the AI's methods of value manipulation are limited to what we can do ourselves, in which case rewiring is not something we can do with any great effect. The point of the assumption is to ask what the AI could do without more direct manipulation. To that end, only persuasion has been offered, and as I've stated, I'm not seeing a compelling argument for why an AI could persuade anyone of anything.

Comment author: Normal_Anomaly 09 May 2012 09:20:33PM 5 points [-]

suppose we had an AI with a utilitarian utility function of maximizing subjective human well-being (meaning, well-being is not something as simple as the physical sensation of "pleasure" and depends on the mental facts of each person) and let us also assume the AI can model this "well" (let's say at least as well as the best of us can deduce the values of another person for their well-being)

You've crammed all the difficulty of FAI into this sentence. An additional limit on how much it can manipulate us does little if anything to make this part easier, and adds the additional complication of how strict this limitation should be. The question of how much FAI would manipulate us is an interesting one, but either it's a small part of the problem or it's something that will be subsumed in the main question of "what do we want?". By the latter I mean that we may decide that the best way to decide how much FAI should change our values is to have it calculate our CEV, the same way that the FAI will decide what economic system to implement.

Comment author: Alerus 09 May 2012 09:39:11PM 0 points [-]

This is not meant to be a resolution to FAI, since you can't stop technology. It's meant to highlight whether the bad behavior of an AI ends up being due to future technology that can more directly change humanity. I'm asking the question because the answer may give insights as to how to tackle the problem.

Comment author: ErikM 09 May 2012 07:57:38PM *  1 point [-]

"Finally, we will also assume that the AI does not possess the ability to manually rewire the human brain to change what a human values. In other words, the ability for the AI to manipulate another person's values is limited by what we as humans are capable of today."

I argue that we as humans are capable of a lot of that, and the AI may be able to think faster and draw upon a larger store of knowledge of human interaction.

Furthermore, what justifies this assumption? If we assume a limit that the AI won't manipulate me any more than Bob across the street would, then yes, the AI is safe, but that limit seems very theoretical. A higher limit, that the AI won't manipulate me more than the most manipulative person in the world would, isn't very reassuring either.

Comment author: Alerus 09 May 2012 09:23:11PM 0 points [-]

Can you give examples of what you think humans' capabilities to rewire another's values are?

As for what justifies the assumption? Nothing. I'm not asking it because I think AIs will have this limit; I'm asking it so we can identify where the real problem lies. That is, I'm curious whether the real problem of AI behavior being bad is entirely specific to advances in biological technology to which eventual AIs will have access, but we don't today. If we can conclude this is the case, it might help us understand how to tackle the problem. Another way to think of the question I'm asking is: take such an AI robot and drop it into today's society. Will it start behaving badly immediately, or will it have to develop technology we don't have today before it can behave badly?

Comment author: DanielLC 09 May 2012 08:11:25PM 3 points [-]

I'd suggest reading Failed Utopia #4-2.

One problem is that if it can create new people, any rules about changing people would be pointless. If it cannot create new people, then it ends up with a Utopia for 6 billion people, which is nothing compared to what could have been.

This could be fixed by letting it rewire human brains, but limiting it to doing what humans would be okay with, if it didn't rewire their brains. This is better, but it still runs into problems in that people wouldn't fully understand what's going on. What you need to do is program it so that it does what people would like if they were smarter, faster, and more the people they wish they were. In other words, use CEV.

Also, it's very hard to define what exactly constitutes "rewiring a human brain". If you make it too general, the AI can't do anything, because that would affect human brains. If you make it too specific, the AI would have some slight limitations on how exactly it messes with people's minds.

Comment author: Alerus 09 May 2012 09:18:58PM -1 points [-]

Thanks for the link, I'll give it a read.

Creating new people is potentially a problem, but I'm not entirely convinced. Let me elaborate. When you say:

What you need to do is program it so that it does what people would like if they were smarter, faster, and more the people they wish they were. In other words, use CEV.

Doesn't this kind of restate, in different words, that it models human well-being and tries to maximize that? I imagine when you phrased it this way that such an AI wouldn't create new people that are easier to maximize, because that isn't what humans would want. And if that's not what humans would want, doesn't that just mean it's negatively viewed in their well-being, and my original definition suffices? Assuming humans don't want the AI to make new people that are simply easier to maximize, if it created a new person, all people on the earth view this negatively and their well-being drops. In fact, it may lead to humans shutting the AI down, so the AI deduces that it cannot create new people that are easier to maximize. The only possible hole in this that I see is if the AI could suddenly create an enormous number of people at once.

Also, it's very hard to define what exactly constitutes "rewiring a human brain". If you make it too general, the AI can't do anything, because that would affect human brains. If you make it too specific, the AI would have some slight limitations on how exactly it messes with people's minds.

Indeed it's difficult to say precisely; that's why I used what we can do now as an analogy. I can't really rewire a person's values at all except through persuasion or other such methods. Even our best neuroscientists can't do that unless I'm ignorant of some profound advances. The most we can really do is tweak pleasure centers (which, as I stated, isn't the metric for well-being) or effectively break the brain so the person is non-operational, but I'd argue that non-operational humans have an effectively zero measure of well-being anyway (for similar reasons as to why I'd say a bug has a lower scale of well-being than a human does).

Is friendly AI "trivial" if the AI cannot rewire human values?

-5 Alerus 09 May 2012 05:48PM

I put "trivial" in quotes because there are obviously some exceptionally large technical achievements that would still need to occur to get here, but suppose we had an AI with a utilitarian utility function of maximizing subjective human well-being (meaning, well-being is not something as simple as the physical sensation of "pleasure" and depends on the mental facts of each person) and let us also assume the AI can model this "well" (let's say at least as well as the best of us can deduce the values of another person for their well-being). Finally, we will also assume that the AI does not possess the ability to manually rewire the human brain to change what a human values. In other words, the ability for the AI to manipulate another person's values is limited by what we as humans are capable of today. Given all this, is there any concern we should have about making this AI; would it succeed in being a friendly AI?

One argument I can imagine for why this fails friendly AI is the AI would wire people up to virtual reality machines. However, I don't think that works very well, because a person (except Cypher from the Matrix) wouldn't appreciate being wired into a virtual reality machine and having their autonomy forcefully removed. This means the action does not succeed in maximizing their well-being.

But I am curious to hear what arguments exist for why such an AI might still fail as a friendly AI.

Comment author: Alerus 08 May 2012 11:51:46PM 0 points [-]

So I think my basic problem here is that I'm not familiar with this construct for decision making or why it would be favored over others. Specifically, why make logical rules about which actions to take? Why not take an MDP value-learning approach, where the agent chooses the action with the highest predicted utility? If an estimate is bad, it's merely updated, and if that situation arises again, the agent might choose a different action as a result of the latest update.
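As a rough illustration of the value-learning idea described above (a minimal sketch of the general approach, not anyone's proposed implementation; the class and parameter names are invented for illustration):

```python
import random
from collections import defaultdict

# Sketch of value learning: keep a running utility estimate per
# (state, action) pair, act on the current highest estimate (with a
# little exploration), and simply correct the estimate when it turns
# out to be wrong.

class ValueLearner:
    def __init__(self, actions, alpha=0.1, epsilon=0.1):
        self.q = defaultdict(float)   # estimated utility of (state, action)
        self.actions = actions
        self.alpha = alpha            # learning rate for updates
        self.epsilon = epsilon        # probability of exploring at random

    def choose(self, state):
        # Mostly exploit the best current estimate; occasionally explore.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, observed_utility):
        # A bad estimate is merely nudged toward what was observed, so
        # the agent may pick differently next time in this state.
        error = observed_utility - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * error
```

The point of the sketch is the contrast with logical rules: nothing here ever states which action is permitted; preferences emerge entirely from the running estimates.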

Comment author: Alerus 08 May 2012 04:19:32PM 4 points [-]

I feel like the suggested distinction between Bayes and science is somewhat forced. Before I knew of Bayes, I knew of Occam's razor and its incredible role in science. I had always been under the impression that science favored simpler hypotheses. If it is suggested that we don't see people rigorously adhering to Bayes' theorem when developing hypotheses, the answer to why is not that science doesn't value the simpler hypotheses suggested by Bayes and priors, but that determining the simplest hypothesis is incredibly difficult to do in many cases. And this difficulty is acknowledged in the post. As such, I'm not seeing science as diverging from Bayes; the way it's practiced is just a consequence of the admitted difficulty of finding the correct priors and determining the space of hypotheses.
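A toy numerical illustration of that Bayes/Occam connection (my own example, not from the post): a "complex" hypothesis with a free parameter can explain many possible data sets, so its likelihood is spread thin, and on unremarkable data Bayes' theorem automatically favors the simple hypothesis.

```python
from fractions import Fraction

def likelihood_simple(heads, flips):
    # Simple hypothesis: the coin is fair, so any particular sequence
    # of outcomes has probability (1/2)^flips.
    return Fraction(1, 2) ** flips

def likelihood_complex(heads, flips):
    # Complex hypothesis: the bias is a free parameter, uniform over
    # eleven candidate values. Averaging over the parameter spreads the
    # probability mass, diluting the likelihood of any given data set.
    biases = [Fraction(k, 10) for k in range(11)]
    return sum(b**heads * (1 - b)**(flips - heads) for b in biases) / len(biases)

def posterior_simple(heads, flips):
    # With equal priors, the posterior reduces to the likelihood ratio.
    ls = likelihood_simple(heads, flips)
    lc = likelihood_complex(heads, flips)
    return ls / (ls + lc)
```

On a run of 5 heads in 10 flips, nothing the fair-coin hypothesis can't explain, the simple hypothesis ends up with the higher posterior: the razor falls out of the arithmetic rather than being bolted on.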

Comment author: momothefiddler 08 May 2012 03:11:57AM 0 points [-]

Hm. If people have approximately equivalent utility functions, does that help them all satisfy their utility functions better? If so, it makes sense to have none of them value stealing (since having all of them value stealing could be a problem). In a large enough society, though, the ripple effect of my theft is negligible. That's beside the point, though.

"Avoid death" seems like a pretty good basis for a utility function. I like that.

Comment author: Alerus 08 May 2012 03:00:22PM 1 point [-]

Yeah I agree that the ripple effect of your personal theft would be negligible. I see it as similar to littering. You do it in a vacuum, no big deal, but when many have that mentality, it causes problems. Sounds like you agree too :-)

Comment author: momothefiddler 07 May 2012 08:05:35PM 0 points [-]

I'm not saying I can change to liking civil war books. I'm saying if I could choose between A) continuing to like scifi and having fantasy books, or B) liking civil war books and having civil war books, I should choose B, even though I currently value scifi>stats>civil war. By extension, if I could choose A) continuing to value specific complex interactions and having different complex interactions, or B) liking smiley faces and building a smiley-face maximizer I should choose B even though it's counterintuitive. This one is somewhat more plausible, as it seems it'd be easier to build an AI that could change my values to smiley faces and make smiley faces than it would be to build one that works toward my current complicated (and apparently inconsistent) utility function.

I don't think society-damaging actions are "objectively" bad in the way you say. Stealing something might be worse than just having it, due to negative repercussions, but that just changes the relative ordering. Depending on the value of the thing, it might still be higher-ordered than buying it.

Comment author: Alerus 07 May 2012 08:47:58PM 0 points [-]

Right, so if you can choose your utility function, then it's better to choose one that can be better maximized. Interestingly, though, if we ever had this capability, I think we could reduce the problem by using an unbiased utility function. That is, explicit preferences (such as liking math versus history) would be removed and instead we'd work with a more fundamental utility function. For instance, death is pretty much a universal stop point, since you cannot gain any utility if you're dead, regardless of your function. This would be, in a sense, the basis of your utility function. We also find that death is better avoided when society works together and develops new technology. Your actions then might be dictated by what you are best at doing to facilitate the functioning and growth of society. This is why I brought up society-damaging actions as being potentially objectively worse. You might be able to come up with specific instances of actions that we associate with being society-damaging that seem okay, such as specific instances of stealing, but then they aren't really society-damaging in the grand scheme of things. That said, I think as a rule of thumb stealing is bad in most cases due to the ripple effects of living in a society in which people do that, but that's another discussion. The point is there may be objectively better choices even if you have no explicit preferences for things (or you can choose your preferences).

Of course, that's all conditioned on whether you can choose your utility function. For our purposes for the foreseeable future, that is not the case and so you should stick with expected utility functions.
