NancyLebovitz comments on The mathematics of reduced impact: help needed - Less Wrong
Sure, I'm not asserting anything about how hard it would be to make an AI smart enough to conquer the universe, only about whether it would want to do so.
OK, actually measuring it would be tricky. AFAIK, designing an AI that cares about features of the environment that it's not directly measuring is another open problem, but that's not specific to satisficers, so I'll skip it here.
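To make the measurement problem concrete, here's a toy sketch (all names and numbers are invented for illustration): a satisficer that optimizes only a measured proxy will happily accept states where the proxy clears the bar but the unmeasured feature we actually care about does not.

```python
import random

def true_feature(state):
    # The thing we actually care about, which the AI never observes directly.
    return state["real_value"]

def measured_proxy(state):
    # What the AI's sensors report; correlated with, but not equal to, the true feature.
    return state["real_value"] + state["sensor_gap"]

def satisfice(states, threshold):
    # A satisficer stops at the first state whose *measured* score clears the bar.
    for s in states:
        if measured_proxy(s) >= threshold:
            return s
    return None

random.seed(0)
states = [{"real_value": random.uniform(0, 10),
           "sensor_gap": random.uniform(0, 5)} for _ in range(100)]

chosen = satisfice(states, threshold=9.0)
print("proxy score:", measured_proxy(chosen))   # >= 9.0 by construction
print("true value: ", true_feature(chosen))     # can be much lower
```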
Any action whatsoever by the AI will have effects on every particle in its future lightcone. Such effects may be chaotic enough that mere humans can't optimize them, but that doesn't make them small.
Is that the kind of thing you meant by a "mode"? If so, how does it help?
Right, but we also don't want to let Clippy off the hook just because there are other agents in the causal chain between it and the paperclips, if Clippy influenced their decisions or desires.
I can't tell whether you're asserting that "the efficiency of getting others to do its work" is a factual question that a sufficiently smart AI will automatically answer correctly, or agreeing with me that it's mostly a values question about what you put in the denominator when defining efficiency?
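For instance (a toy example, numbers invented): the same two actions can rank oppositely depending on which cost goes in the denominator, which is why the choice of denominator is a values question rather than a factual one.

```python
# Two candidate actions for Clippy, with invented numbers.
actions = {
    "build clips itself":     {"clips": 100, "joules": 50, "humans_coerced": 0},
    "persuade humans to do it": {"clips": 120, "joules": 10, "humans_coerced": 30},
}

def efficiency(name, cost_key):
    # "Efficiency" is clips per unit of whatever we decide counts as cost.
    cost = actions[name][cost_key]
    return actions[name]["clips"] / cost if cost else float("inf")

# Same facts, different denominators, opposite rankings:
for cost_key in ("joules", "humans_coerced"):
    best = max(actions, key=lambda name: efficiency(name, cost_key))
    print(f"cost = {cost_key}: best action is {best!r}")
```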
Would the AI be able to come to a conclusion within those constraints, or might it be snagged by the problem of including the negentropy cost of computing its negentropy cost?
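One way to see when that regress terminates (a toy model, all constants invented): if estimating each cost itself costs a fixed fraction r < 1 of the previous estimate, the total is a convergent geometric series and the AI can bound it in finitely many steps; if r >= 1 it never settles.

```python
def total_negentropy_cost(base_cost, r, tol=1e-9, max_levels=10_000):
    """Sum the regress: the action costs base_cost, computing that cost
    costs r * base_cost, computing *that* costs r**2 * base_cost, etc.
    Converges to base_cost / (1 - r) when r < 1."""
    total, level_cost = 0.0, base_cost
    for _ in range(max_levels):
        total += level_cost
        level_cost *= r
        if level_cost < tol:
            return total
    raise RuntimeError("regress does not converge for this r")

print(total_negentropy_cost(1.0, r=0.5))   # ~2.0, matching base/(1 - r)

try:
    total_negentropy_cost(1.0, r=1.0)      # each meta-level costs as much as the last
except RuntimeError as e:
    print(e)
```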