Lumifer comments on Thought experiment: The transhuman pedophile - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Link? Under what conditions?
Example of somebody making that claim.
It seems to me a rational agent should never change its self-consistent terminal values. To act out that change would be to act according to some other value and not the terminal values in question. You'd have to say that the rational agent floats around between different sets of values, which is something that humans do, obviously, but not ideal rational agents. The claim then is that ideal rational agents have perfectly consistent values.
"But what if something happens to the agent which causes it to see that its values were wrong, should it not change them?" Cue a cascade of reasoning about which values are "really terminal."
Only a static rational agent, an unchanging and unchangeable one. In other words, a dead one.
All things change. In particular, with the passage of time both the agent himself and the world around him change. I see absolutely no reason why the terminal values of a rational agent should be an exception to the universal process of change.
Why would you expect terminal values to change? Does your agent have some motivation (which leads it to choose to change) other than its terminal values? Or is it choosing to change its terminal values in pursuit of those values? Or are the terminal values changing involuntarily?
In the first case, the things doing the changing are not the real terminal values.
In the second case, that doesn't seem to make sense.
In the third case, what we're discussing is no longer a perfect rational agent.
What exactly do you mean by "perfect rational agent"? Does such a creature exist in reality?