TheOtherDave comments on The genie knows, but doesn't care - Less Wrong

54 Post author: RobbBB 06 September 2013 06:42AM

Comment author: TheOtherDave 12 September 2013 11:00:27PM 1 point

There is another agent with greater than 0.00001% chance of taking the button away? Obviously that needs to be eliminated.

Only if the expected cost of the non-zero x% chance of the other agent successfully taking my button away if I attempt to sequester myself is higher than the expected cost of the non-zero y% chance of the other agent successfully taking my button away if I attempt to eliminate it.

Is there some reason I'm not seeing why that's obvious... or even why it's more likely than not?
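The trade-off described above can be sketched with a toy expected-cost calculation. All of the probabilities and costs below are purely hypothetical numbers, chosen only to illustrate the comparison being questioned:

```python
# Toy model: an agent that only values keeping its reward button compares two
# strategies. Every figure here is a made-up illustrative number.

def expected_cost(p_button_lost: float, cost_if_lost: float, base_cost: float) -> float:
    """A strategy's own resource cost plus the chance-weighted cost of the
    other agent successfully taking the button away."""
    return base_cost + p_button_lost * cost_if_lost

COST_IF_LOST = 1e12  # assumed value of all future button presses

# Sequestering is cheap up front but leaves a higher residual chance of losing
# the button; eliminating the rival is expensive up front but leaves almost none.
sequester = expected_cost(p_button_lost=1e-3, cost_if_lost=COST_IF_LOST, base_cost=1e3)
eliminate = expected_cost(p_button_lost=1e-7, cost_if_lost=COST_IF_LOST, base_cost=1e8)

print(eliminate < sequester)  # True -- but only for this choice of numbers
```

Which strategy wins depends entirely on the assumed probabilities and costs, which is exactly the point of the question: the conclusion is not obvious without those numbers.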

Taking over the future light cone allows it to continue pressing the button for billions of more years than if it doesn't take over resources.

Again, perhaps I'm being dense, but in this particular example I'm not sure why that's true. If all I care about is pressing my reward button, then it seems like I can make a pretty good estimate of the resources required to keep pressing my reward button for the expected lifetime of the universe. If that's less than the resources required to exterminate all known life, why would I waste resources exterminating all known life rather than take the resources I require elsewhere? I might need those resources later, after all.

all the additional research and computation

Again... why is the differential expected value of the superior computation ability I gain by taking over the lightcone instead of sequestering myself, expressed in units of increased anticipated button-pushes (which is the only unit that matters in this example), necessarily positive?

I understand why paperclip maximizers are dangerous, but I don't really see how the same argument applies to reward-button-pushers.

Comment author: wedrifid 12 September 2013 11:42:53PM 3 points

Only if the expected cost of the non-zero x% chance of the other agent successfully taking my button away if I attempt to sequester myself is higher than the expected cost of the non-zero y% chance of the other agent successfully taking my button away if I attempt to eliminate it.

Yes.

Is there some reason I'm not seeing why that's obvious... or even why it's more likely than not?

It does seem overwhelmingly obvious to me; I'm not sure what makes your intuitions different. Perhaps you expect such fights to be more evenly matched? When the AI considers conflict with the humans that created it, it is faced with a species that is slow and stupid by comparison to itself, but which has the capacity to recklessly create arbitrary superintelligences (as evidenced by its own existence). Essentially there is no risk in obliterating the humans (superintelligence vs. not-superintelligence) but a huge risk in ignoring them (arbitrary superintelligences are likely to be created which will probably not self-cripple in this manner).

Again, perhaps I'm being dense, but in this particular example I'm not sure why that's true. If all I care about is pressing my reward button, then it seems like I can make a pretty good estimate of the resources required to keep pressing my reward button for the expected lifetime of the universe.

Lifetime of the universe? Usually this means until heat death, which for our purposes means until all the useful resources run out. There is no upper bound on the amount of resources that is useful. Getting more of them and making them last as long as possible is critical.

Now there are ways in which the universe could end without heat death occurring, but the physics is rather speculative. Note that if there is uncertainty about end-game physics, and in one of the hypothesised scenarios resource maximisation is required, then the default strategy is to optimize for power gain now (ie. minimise cosmic waste) while doing the required physics research as spare resources permit.

If that's less than the resources required to exterminate all known life, why would I waste resources exterminating all known life rather than take the resources I require elsewhere? I might need those resources later, after all.

Taking over the future light cone gives more resources, not less. You even get to keep the resources that used to be wasted in the bodies of TheOtherDave and wedrifid.

Comment author: TheOtherDave 12 September 2013 11:55:09PM 2 points

a huge risk in ignoring them (arbitrary superintelligences are likely to be created which will probably not self-cripple in this manner).

Ah. Fair point.

Comment author: private_messaging 12 September 2013 11:55:41PM * -1 points

Again, perhaps I'm being dense, but in this particular example I'm not sure why that's true. If all I care about is pressing my reward button, then it seems like I can make a pretty good estimate of the resources required to keep pressing my reward button for the expected lifetime of the universe.

I am not sure that caring about pressing the reward button is very coherent, or stable upon discovery of facts about the world, once superintelligent optimization targets the reward as it actually enters the algorithm. You can take action elsewhere to the same effect: solder the wires together, perhaps right at the chip, or inside the chip, or follow the chain of events further and set the memory cells directly (after all, you don't want them to be flipped by cosmic rays). Further down you will find the mechanism that combines rewards with some variety of a clock.

Comment author: TheOtherDave 13 September 2013 03:37:36AM 0 points

I can't quite tell if you're serious. Yes, certainly, we can replace "pressing the reward button" with a wide range of self-stimulating behavior, but that doesn't change the scenario in any meaningful way as far as I can tell.

Comment author: private_messaging 13 September 2013 05:16:17AM * -1 points

Let's look at it this way. Do you agree that if the AI can increase its clock speed (with no ill effect), it will do so for the same reasons for which you concede it may go to space? Do you understand the basic logic that an increase in clock speed increases the expected number of "rewards" during the lifetime of the universe? (This, by the way, also applies to your "go to space with a battery" scenario: longest time, maybe, but not the largest reward over that time.)

(That would not, by itself, change the scenario just yet. I want to walk you through the argument step by step because I don't know where you fail to follow. Maximizing the reward over future time is a human label we have... it's not really the goal.)

Comment author: TheOtherDave 13 September 2013 05:00:33PM 0 points

I agree that a system that values number of experienced reward-moments therefore (instrumentally) values increasing its "clock speed" (as you seem to use the term here). I'm not sure if that's the "basic logic" you're asking me about.
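The "basic logic" being agreed to here is just multiplication: for a system that counts experienced reward-moments, subjective clock speed is a multiplier on lifetime. A minimal sketch, with all figures invented for illustration:

```python
# Illustrative only: experienced reward-moments scale with subjective clock
# speed as well as with wall-clock lifetime. All figures are invented.

def reward_moments(clock_hz: float, lifetime_seconds: float) -> float:
    """One potential reward-moment per tick of the agent's subjective clock."""
    return clock_hz * lifetime_seconds

base = reward_moments(clock_hz=1e9, lifetime_seconds=1e17)
doubled = reward_moments(clock_hz=2e9, lifetime_seconds=1e17)

print(doubled / base)  # 2.0 -- doubling clock speed doubles reward-moments
```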

Comment author: private_messaging 13 September 2013 05:20:58PM * -1 points

Well, this immediately creates an apparent problem: the AI is going to try to run itself very, very fast, which would require resources, and would, if anything, require expansion to get energy for running itself at high clock speeds.

I don't think this is what happens either, as the number of reward-moments could be increased to its maximum by modifications to the mechanism processing the rewards (once the AI gets far enough along the road that starts with shorting the wires that go from the button to the AI).

Comment author: TheOtherDave 13 September 2013 06:33:31PM 0 points

I agree that if we posit that increasing "clock speed" requires increasing control of resources, then the system we're hypothesizing will necessarily value increasing control of resources, and that if it doesn't, it might not.

Comment author: private_messaging 13 September 2013 08:09:37PM * -1 points

So what do you think regarding my second point?

To clarify, I am pondering the ways in which the maximizer software deviates from our naive mental models of it, and trying to find what the AI could actually end up doing after it forms a partial model of what its hardware components do with its rewards - tracing the reward pathway.

Comment author: TheOtherDave 13 September 2013 09:05:31PM 0 points

Regarding your second point, I don't think that increasing "clock speed" necessarily requires increasing control of resources to any significant degree, and I doubt that the kinds of system components you're positing here (buttons, wires, etc.) are particularly important to the dynamics of self-reward.

Comment author: private_messaging 13 September 2013 09:12:22PM * -1 points

I don't have a particular opinion with regard to the clock speed either way.

With the components, what I am getting at is that the AI could figure out (by building a sufficiently advanced model of its implementation) how to attain the utility-equivalent of sitting forever in space being rewarded within one instant, which would make it unable to have a preference for longer reward times.

I raised the clock-speed point to clarify that the actual time is not the relevant variable.