NancyLebovitz comments on The mathematics of reduced impact: help needed - Less Wrong

Post author: Stuart_Armstrong, 16 February 2012 02:23PM




Comment author: pengvado, 17 February 2012 01:32:39PM

> I'm going to guess this is an easier problem than conquering the universe.

Sure, I'm not asserting anything about how hard it would be to make an AI smart enough to conquer the universe, only about whether it would want to do so.

> Could you? The universe is pretty big.

OK, actually measuring it would be tricky. AFAIK, designing an AI that cares about features of the environment that it's not directly measuring is another open problem, but that's not specific to satisficers, so I'll skip it here.

> The approach I would try would depend on the modes the agent has to manipulate reality.

Any action whatsoever by the AI will have effects on every particle in its future lightcone. Such effects may be chaotic enough that mere humans can't optimize them, but that doesn't make them small.

Is that the kind of thing you meant by a "mode"? If so, how does it help?

> We want to charge Clippy when it thinks and moves, but not when others think and move. But if Clippy can't tell the difference between itself and others, then that will be really hard to do.

Right, but we also don't want to let Clippy off the hook just because there are other agents in the causal chain between it and the paperclips, if Clippy influenced their decisions or desires.

> Clippy will probably try to shirk and get others to do its work; but that may be efficient behavior, and it should learn that it's not effective if it's not efficient.

I can't tell whether you're asserting that "the efficiency of getting others to do its work" is a factual question that sufficiently smart AI will automatically answer correctly, or agreeing with me that it's mostly a values question about what you put in the denominator when defining efficiency?
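The "denominator" point can be made concrete. Here is a minimal sketch (the function and parameter names are hypothetical, not from the discussion) of two rival definitions of efficiency that agree on all the facts and differ only in a value choice: whether delegated effort is charged to Clippy.

```python
def efficiency(paperclips, own_effort, delegated_effort, charge_delegation):
    """Paperclips produced per unit of charged effort.

    Whether delegated_effort belongs in the denominator is not a factual
    question a smarter AI settles automatically; it is part of the value
    specification, and the two choices give opposite verdicts on shirking.
    """
    denominator = own_effort + (delegated_effort if charge_delegation else 0)
    return paperclips / denominator

# Clippy makes 100 clips: 10 units of its own effort, 90 units delegated.
lenient = efficiency(100, 10, 90, charge_delegation=False)  # 10.0
strict = efficiency(100, 10, 90, charge_delegation=True)    # 1.0
```

Under the lenient definition, delegation looks like a tenfold improvement; under the strict one, it is mediocre. The facts are identical in both calls.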

Comment author: NancyLebovitz, 19 February 2012 12:32:02PM

Would the AI be able to come to a conclusion within those constraints, or might it be snagged by the problem of including the negentropy cost of computing its negentropy cost?
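Whether that regress actually snags the AI may depend on how the accounting overhead scales. Under one toy assumption (the constant-ratio model and all names here are hypothetical), where each meta-level of cost-computation costs a fixed fraction of the level below it, the regress is a convergent geometric series rather than an infinite snag:

```python
def cost_with_accounting(base_cost, overhead_ratio, tol=1e-9):
    """Total cost including the cost of computing the cost, the cost of
    computing *that*, and so on.

    If each accounting round costs overhead_ratio times the previous one
    (with 0 <= overhead_ratio < 1), the series converges to roughly
    base_cost / (1 - overhead_ratio) instead of diverging.
    """
    total, term = 0.0, float(base_cost)
    while term > tol:
        total += term
        term *= overhead_ratio
    return total

# One unit of direct negentropy cost, each meta-level half as expensive:
estimate = cost_with_accounting(1.0, 0.5)  # ~2.0
```

Of course, nothing guarantees real accounting overhead shrinks at each level; if the ratio is 1 or more, the regress genuinely diverges, which is the worry raised above.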