red75 comments on Value Deathism - Less Wrong

Post author: Vladimir_Nesov, 30 October 2010 06:20PM

Comment author: red75 15 December 2010 05:27:55AM 0 points

I'm not sure I understand you. The values of the original agent specify a class of programs that it can become. Which program from this class should deal with observations?

> It's not better to forget some component of values.

Forget? Is this about "too smart to optimize"? That is not the meaning I intended.

When the computer encounters the borders of its universe, it will have an incentive to explore every possibility that this is not the true border of the universe, such as: active deception by an adversary, different rules of the game's "physics" for the rest of the universe, the possibility that its universe is simulated, and so on. I don't see why it would ever be rational for it to stop checking those hypotheses and begin optimizing the universe.
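
To make the incentive concrete, here is a toy value-of-information sketch in Python (my own illustration, not anything from the thread; the prior, test power, and cost are all made-up assumptions). If committing to optimization pays 1 when the border is real and 0 when it is fake, then one more test is worth running whenever p_fake * test_power > test_cost, and after each inconclusive test p_fake shrinks by Bayes' rule:

    # Toy model: at an apparent border of the universe, the agent chooses
    # between committing to optimization now and running one more test of
    # the hypothesis that the border is fake (deception, simulation, etc.).
    #   EV(commit now)        = 1 - p_fake
    #   EV(test, then commit) = (1 - p_fake) + p_fake * test_power - test_cost
    # so one more test is worthwhile while p_fake * test_power > test_cost.
    p_fake = 0.10      # prior probability the border is not the true border
    test_power = 0.5   # chance one test exposes a fake border, if it is fake
    test_cost = 1e-6   # cost of one test, as a fraction of the value at stake

    tests = 0
    while p_fake * test_power > test_cost:
        # Inconclusive test: update the probability by Bayes' rule.
        p_fake = p_fake * (1 - test_power) / (1 - p_fake * test_power)
        tests += 1

    print(tests, p_fake)  # about 16 tests before committing, with these numbers

The loop halts only because this model has a single fixed hypothesis whose probability Bayes' rule drives toward zero. The point above is that the space of border-undermining hypotheses (adversaries, different physics, nested simulations) is open-ended, so p_fake keeps being replenished and the stopping condition may never be reached.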