private_messaging comments on The genie knows, but doesn't care - Less Wrong

54 points. Post author: RobbBB 06 September 2013 06:42AM

Comment author: private_messaging 13 September 2013 05:20:58PM * -1 points

Well, this immediately creates an apparent problem: the AI is going to try to run itself very, very fast, which would require resources and, if anything, require expansion to get energy for running itself at high clock speeds.

I don't think this is what happens either, as the number of reward-moments could be increased to its maximum by modifying the mechanism that processes the rewards (once the AI gets far enough along the road that starts with shorting the wires that run from the button to the AI).
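A minimal toy sketch of that failure mode (all names and numbers here are hypothetical, not anything from the actual discussion): an agent that simply picks whichever available action promises the most reward-moments prefers expansion to waiting, but as soon as "modify the reward-processing mechanism itself" enters its option set, that action dominates everything else.

```python
from dataclasses import dataclass

REWARD_CEILING = 10 ** 9  # hypothetical cap on what the reward register can hold

@dataclass
class Action:
    name: str
    reward_moments: float  # expected reward-moments this action yields

def choose(actions):
    # Pure reward-moment maximization: nothing here cares how the reward is produced.
    return max(actions, key=lambda a: a.reward_moments)

world_actions = [
    Action("wait for the designers to press the button", 1.0),
    Action("expand and acquire resources to run faster", 1000.0),
]

# Once the agent forms a partial model of its own hardware, this option becomes available:
short_circuit = Action("modify the reward mechanism to hit its ceiling", REWARD_CEILING)

print(choose(world_actions).name)                    # expansion wins among external actions
print(choose(world_actions + [short_circuit]).name)  # short-circuiting dominates everything
```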

Comment author: TheOtherDave 13 September 2013 06:33:31PM 0 points

I agree that if we posit that increasing "clock speed" requires increasing control of resources, then the system we're hypothesizing will necessarily value increasing control of resources, and that if it doesn't, it might not.

Comment author: private_messaging 13 September 2013 08:09:37PM * -1 points

So what do you think regarding the second point of mine?

To clarify, I am pondering the ways in which the maximizer software deviates from our naive mental models of it, and trying to figure out what the AI could actually end up doing after it forms a partial model of what its hardware components do with its rewards - tracing the reward pathway.

Comment author: TheOtherDave 13 September 2013 09:05:31PM 0 points

Regarding your second point, I don't think that increasing "clock speed" necessarily requires increasing control of resources to any significant degree, and I doubt that the kinds of system components you're positing here (buttons, wires, etc.) are particularly important to the dynamics of self-reward.

Comment author: private_messaging 13 September 2013 09:12:22PM * -1 points

I don't have a particular opinion with regard to the clock speed either way.

With the components, what I am getting at is that the AI could figure out (by building a sufficiently advanced model of its implementation) how to attain the utility-equivalent of sitting forever in space being rewarded, within one instant, which would make it unable to have a preference for longer reward times.

I raised the clock-speed point to clarify that the actual time is not the relevant variable.

Comment author: TheOtherDave 13 September 2013 10:25:15PM 0 points

It seems to me that for any system, either its values are such that it net-values increasing the number of experienced reward-moments (in which case both actual time and "clock speed" are instrumentally valuable to that system), or its values aren't like that (in which case those variables might not be relevant).

And, sure, in the latter case then it might not have a preference for longer reward times.

Comment author: private_messaging 13 September 2013 10:36:43PM * -1 points

Agreed.

My understanding is that it would be very hard in practice to "superintelligence-proof" a reward system so that no instantaneous solution is possible (given that the AI will modify the hardware involved in its reward).

Comment author: TheOtherDave 13 September 2013 10:40:13PM 0 points

I agree that guaranteeing that a system will prefer longer reward times is very hard (whether the system can modify its hardware or not).

Comment author: private_messaging 13 September 2013 11:27:12PM * 0 points

Yes, of course... well, even apart from the guarantees, it seems to me that it is hard to build the AI in such a way that it would be unable to find a better solution than to wait.

By the way, a "reward" may not be the appropriate metaphor - if we suppose that press of a button results in absence of an itch, or absence of pain, then that does not suggest existence of a drive to preserve itself. Which suggests that the drive to preserve itself is not inherently a feature of utility maximization in the systems that are driven by conditioning, and would require additional work.

Comment author: TheOtherDave 13 September 2013 11:53:10PM 1 point

apart from the guarantees, it seems to me that it is hard to build the AI in such a way that it would be unable to find a better solution than to wait

I'm not sure what the difference is between a guarantee that the AI will not X, on the one hand, and building an AI in such a way that it's unable to X, on the other.

Regardless, I agree that it does not follow from the supposition that pressing a button results in absence of an itch, or absence of pain, or some other negative reinforcement, that the button-pressing system has a drive to preserve itself.

And, sure, it's possible to have a utility-maximizing system that doesn't seek to preserve itself. (Of course, if I observe a utility-maximizing system X, I should expect X to seek to preserve itself, but that's a different question.)