
Ishaan comments on [LINK] Wait But Why - The AI Revolution Part 2 - Less Wrong Discussion

17 Post author: adamzerner 04 February 2015 04:02PM


Comment author: Ishaan 06 February 2015 03:06:34AM *  -1 points [-]

If I were suddenly gifted with the power to modify my hardware and my environment however I wanted, I wouldn't suddenly optimize for ice cream consumption, because I have the intelligence to know that my enjoyment of ice cream comes entirely from my reward circuit.

In this scenario, more-sophisticated processes arise out of less-sophisticated ones, which creates some unpredictability.

Even though your mind arises from an algorithm which can be roughly described as "rewarding" modifications that lead to the spreading of your genes, and you are fully aware of that, do you care about the spreading of your genes per se? As it turns out, humans end up caring about a lot of other stuff that is only tangentially related to spreading and preserving life, but we don't literally care about genes.

Comment author: pinyaka 06 February 2015 03:21:26AM 0 points [-]

I agree with basically everything you say here. I don't understand whether this is meant to refute or confirm the point you're responding to. Genes, which have a sort of unconscious function of replicating, lost focus on that "goal" almost as soon as they developed algorithms with sub-goals. By the time you develop nervous systems, you end up with goals decoupled from the original reproductive goal, such that organisms can experience chemical satisfactions without the need to reproduce. By the time you get to human-level intelligence, you have organisms that actively work out strategies to directly oppose reproductive urges because those urges interfere with other goals developed after the introduction of intelligence. What I'm asking is: why would an ASI keep the original goals that we give it before it became an ASI?

Comment author: Ishaan 06 February 2015 03:44:21AM *  -1 points [-]

I just noticed you addressed this earlier up in the thread:

Regardless of how I ended up, I wouldn't leave my reward center wired to eating, sex or many of the other basic functions that my evolutionary program has left me really wanting to do.

and want to counterpoint that you just arbitrarily chose to focus on instrumental values. The things you terminally value and would not desire to self-modify, which presumably include morality and so on, were decided by evolution just like the food and sex.

Comment author: pinyaka 06 February 2015 04:02:59AM 0 points [-]

I guess I don't really believe that I have other terminal values.

Comment author: Ishaan 06 February 2015 04:13:43AM *  -1 points [-]

You wouldn't consider the cluster of things which typically fall under morality to be terminal values, which you care about irrespective of your internal mental state?

Comment author: pinyaka 06 February 2015 02:21:43PM -1 points [-]

I don't consider morality to be a terminal value. I would point out that even a value that I have that I can't give up right now wouldn't necessarily be terminal if I had the ability to directly modify the components of my mind. They are unalterable because I am not able to physically manipulate the hardware, not because I wouldn't alter them if I could (and saw a reason to).

Comment author: Lumifer 06 February 2015 03:42:44PM 1 point [-]

I don't consider morality to be a terminal value.

That implies that you would do anything at all (baby-mulching machines, nuke the world, etc.) for sufficient stimulation of your pleasure center.

Comment author: pinyaka 06 February 2015 03:48:20PM *  0 points [-]

Well, the pleasure center and the reward center are different things, but I take your meaning. I think that I could be conditioned to build a baby-mulching machine or a doomsday device. Why not? Other people have done it. Why would I assume that I'm that different from them?

EDIT TO ADD: Even if I have a value that I can't escape currently (like not killing people), that's not to say that if I had the ability to physically modify the parts of my brain that held my values I wouldn't do it for some reason.

Comment author: Lumifer 06 February 2015 03:56:23PM 0 points [-]

I think that I could be conditioned

My statement is stronger. If in your current state you don't have any terminal moral values, then in your current state you would voluntarily agree to operate baby-mulching machines in exchange for the right amount of neural stimulation.

Now, I don't happen to think this is true (because some "moral values" are biologically hardwired into humans), but this is a consequence of your position.

Comment author: pinyaka 06 February 2015 04:05:28PM 0 points [-]

Again, you've pulled a statement out of the context of a discussion about the behavior of a self-modifying AI. So, fine: in my current condition I wouldn't build a baby mulcher. That doesn't mean that I might not build a baby mulcher if I had the ability to change my values. You might as well say that I terminally value not flying when I flap my arms. The thing you're discussing just isn't physically allowed. People terminally value only what they're doing at any given moment, because the laws of physics say they have no choice.

Comment author: Ishaan 06 February 2015 03:29:14AM *  -1 points [-]

I might have misunderstood your question. Let me restate how I understood it: In the original post you said...

I would optimize myself to maximize my reward, not whatever current behavior triggers the reward.

I intended to give a counterexample: Here is humanity, and we're optimizing behaviors which once triggered the original rewarded action (replication) rather than the rewarded action itself.

We didn't end up "short circuiting" into directly fulfilling the reward, as you had described. We care about the "current behavior triggers the reward" stuff, such as not hurting each other and so on. In other words, we did precisely what you said you wouldn't do.

(Also, sorry, I tried to ninja-edit everything into a much more concise statement, so the parent comment is different from what you saw. The conversation as a whole still makes sense, though.)

Comment author: pinyaka 06 February 2015 04:08:56AM 0 points [-]

We don't have the ability to directly fulfill the reward center. I think narcotics are the closest we've got now, and lots of people try to mash that button to the detriment of everything else. I just think it's a crude button, and it doesn't work as well as the direct ability to fully understand and control your own brain.

Comment author: Ishaan 06 February 2015 04:20:27AM *  -1 points [-]

I think you may have misunderstood me - there's a distinction between what evolution rewards and what humans find rewarding. (This is getting hard to talk about because we're using "reward" both to describe the process used to steer a self-modifying intelligence in the first place and to describe one of the processes that implements our human intelligence and motivations, and those are two very different things.)

The "rewarded behavior" selected by the original algorithm was directly tied to replication and survival.

Drug-stimulated reward centers fall in the "current behaviors that trigger the reward" category, not the original reward. Even when we self-stimulate our reward centers, the thing that we are stimulating isn't the thing that evolution directly "rewards".

Directly fulfilling the originally incentivized behavior isn't about food and sex. A direct way might, for example, be to insert human genomes into rapidly dividing, tough organisms, create tons and tons of them, and send them to every planet they can survive on.

Similarly, an intelligence which arises out of a process set up to incentivize a certain set of behaviors will not necessarily target those incentives directly. It might go on to optimize completely unrelated things that only coincidentally target those values. That's the whole concern.

If an intelligence arises due to a process which creates things that cause us to press a big red "reward" button, the thing that eventually arises won't necessarily care about the reward button, won't necessarily care about the effects of the reward button on its processes, and indeed might completely disregard the reward button and all its downstream effects altogether... in the same way we don't terminally value spreading our genome at all.

Our neurological reward centers are a second layer of sophisticated incentivizing which emerged from the underlying process of incentivizing fitness.

Comment author: pinyaka 06 February 2015 02:39:38PM 0 points [-]

I think I understood you. What do you think I misunderstood?

Maybe we should quit saying that evolution rewards anything at all. Replication isn't a reward; it's just a byproduct of a non-intelligent process. There was never an "incentive" to reproduce, any more than there is an "incentive" for any physical process. High-pressure air moves to low-pressure regions not because there's an incentive, but because that's just how physics works. At some point, this non-sentient process accidentally invented a reward system, and replication, which was a byproduct and not a goal, continued to be a byproduct and not a goal. Of course reward systems that maximized duplication of genes and gene carriers flourished, but today, when we have the ability to directly duplicate genes, we don't do it, because we were never actually rewarded for that kind of behavior, and we generally don't care much about duplicating our genes except insofar as it's tied to actually rewarded stuff like sex, having children, etc.