twanvl comments on The Blue-Minimizing Robot - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Good point, but the fact that humans are consequentialists (at least partly) doesn't seem to make the problem much easier. Suppose we replace Yvain's blue-minimizer robot with a simple consequentialist robot that has the same behavior (let's say it models the world as a 2D grid of cells that have intrinsic color, it always predicts that any blue cell that it shoots at will turn some other color, and its utility function assigns negative utility to the existence of blue cells). What does this robot "actually want", given that the world is not really a 2D grid of cells that have intrinsic color?
The robot wants to minimize the amount of blue it sees in its grid representation of the world. It can do this by affecting the world with a laser. But it could also change its camera system so that it sees less blue. If there is no term in the utility function that says that the grid has to model reality, then both approaches are equally valid.
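This can be made concrete with a toy sketch (all names here are hypothetical, not from the original post): the robot's utility is computed over its internal grid representation, so a plan that lasers the blue cells and a plan that re-labels the camera output so blue is never reported are predicted to score identically.

```python
# Toy model of the consequentialist blue-minimizer (hypothetical sketch).
# Utility is defined over the robot's *internal* grid, not over the world,
# so any plan that removes blue from the representation scores the same.

GRID = [["blue", "red"], ["green", "blue"]]  # robot's model of the world

def utility(grid):
    """Assign -1 per blue cell in the robot's representation."""
    return -sum(cell == "blue" for row in grid for cell in row)

def fire_laser(grid):
    """Predicted effect of the laser: blue cells turn some other color."""
    return [["gray" if c == "blue" else c for c in row] for row in grid]

def tamper_with_camera(grid):
    """Relabel the camera's output so it never reports blue."""
    return [["gray" if c == "blue" else c for c in row] for row in grid]

print(utility(GRID))                     # -2: two blue cells in the model
print(utility(fire_laser(GRID)))         #  0: blue gone via the laser
print(utility(tamper_with_camera(GRID))) #  0: blue gone via wireheading
```

Since no term in the utility function ties the grid to external reality, both functions produce the same predicted grid and hence the same utility, which is the point of the comment.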