red75 comments on Value Deathism - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (118)
I'm not sure I understand you. The values of the original agent specify a class of programs it can become. Which program in this class should deal with observations?
Forget? Is this about being "too smart to optimize"? That isn't the meaning I intended.
When the computer encounters the borders of its universe, it will have an incentive to explore every possibility that this is not the true border of the universe: active deception by an adversary, different rules of the game's "physics" for the rest of the universe, the possibility that its universe is simulated, and so on. I don't see why it would ever be rational for it to stop checking these hypotheses and begin to optimize the universe.
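To make the intuition concrete, here is a toy sketch (my own illustration, not from the comment): a value-of-information comparison where the agent weighs one more "is the border real?" check against optimizing the universe it already knows. All names and numbers are hypothetical assumptions for illustration.

```python
# Toy model (illustrative assumption, not the comment's formalism):
# an agent compares the expected gain of testing one more
# "the border is not the true border" hypothesis against the cost.

def expected_gain_from_checking(p_border_fake, value_outside, check_cost):
    """Expected utility gain of one more check vs. optimizing now."""
    return p_border_fake * value_outside - check_cost

# Even a tiny residual probability that the border is fake can dominate
# when the value of a larger universe is enormous.
p = 1e-9              # probability the apparent border is not the true one
value_outside = 1e15  # resources available if the universe is bigger
cost = 1.0            # cost of one more check

print(expected_gain_from_checking(p, value_outside, cost) > 0)  # True
```

As long as the agent cannot drive the probability of a fake border to exactly zero, and the stakes outside are large enough, checking keeps beating optimizing, which is the point of the comment.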