CynicalOptimist comments on The Least Convenient Possible World - Less Wrong

165 Post author: Yvain 14 March 2009 02:11AM




Comment author: CynicalOptimist 24 April 2016 12:45:50PM 0 points

Okay, well let's apply exactly the technique discussed above:

If the hypothetical Omega tells you that there is indeed a maximum value for happiness, and that you will certainly be maximally happy inside the box: do you step into the box then?

Note: I'm asking that in order to give another example of the technique in action. But still feel free to give a real answer if you'd like to.

Since you didn't answer the question one way or the other, I can't apply the second technique here. I can't ask what would have to change in order for you to change your answer.

Comment author: thrawnca 27 July 2016 02:08:55AM 0 points

If the hypothetical Omega tells you that there is indeed a maximum value for happiness, and that you will certainly be maximally happy inside the box: do you step into the box then?

This would depend on my level of trust in Omega. (Why would I believe it? Because Omega said so. Why believe Omega? That depends on how far Omega has demonstrated near-omniscience and honesty.) And in the absence of Omega telling me so, I'm rather skeptical of the idea.

Comment author: TheOtherDave 27 July 2016 04:58:51PM 0 points

For my part, it's difficult for me to imagine a set of observations I could make that would provide sufficient evidence to justify belief in many of the kinds of statements that get tossed around in these sorts of discussions. I generally just assume Omega adjusts my priors directly.

Comment author: Jiro 26 April 2016 02:42:16PM 0 points

What if we ignore the VR question? Omega tells you that killing and eating your children will make you maximally happy. Should you do it?

Omega can't tell you that doing X makes you maximally happy unless doing X actually makes you maximally happy. And a scenario where doing X actually makes you maximally happy may be a scenario where you are no longer human and don't have human preferences.

Omega could, of course, also say "you are mistaken when you conclude that being maximally happy in this scenario is not a human preference". However,

  1. The conclusion that this is not a human preference is being made by you, the reader, not just by the person in the scenario. It is not possible to stipulate that you, the reader, are wrong in your analysis of some scenario.
  2. Even within the scenario, if someone is mistaken about something like this, then he can't trust his own reasoning abilities, so there's really nothing he can conclude about anything at all. (What if Omega tells you that you don't understand logic, and that every use of logic you think you have made was either wrong or true only by coincidence?)