Jiro comments on The Least Convenient Possible World - Less Wrong
Okay, well let's apply exactly the technique discussed above:
If the hypothetical Omega tells you that there is indeed a maximum value for happiness, and that you will certainly be maximally happy inside the box: do you step into the box then?
Note: I'm asking that in order to give another example of the technique in action. But still feel free to give a real answer if you'd choose to.
Since you didn't answer the question one way or another, I can't apply the second technique here: I can't ask what would have to change in order for you to change your answer.
What if we ignore the VR question? Omega tells you that killing and eating your children will make you maximally happy. Should you do it?
Omega can't truthfully tell you that doing X makes you maximally happy unless doing X actually makes you maximally happy. And a scenario where doing X actually makes you maximally happy may be a scenario where you are no longer human and no longer have human preferences.
Omega could, of course, also say "you are mistaken when you conclude that being maximally happy in this scenario is not a human preference". However,