Vladimir_Nesov comments on Anthropics in a Tegmark Multiverse - Less Wrong Discussion

Post author: paulfchristiano 02 April 2011 06:34PM

Comment author: Vladimir_Nesov 03 April 2011 08:06:27PM 0 points

Then you shouldn't be certain.

I'm certainly not. Like I said, if you have any arguments I expect they could change my opinion.

No, you are talking about a different property of beliefs, lack of stability to new information. I claim that because of lack of reflective understanding of the origins of the belief, you currently shouldn't be certain, without any additional object-level arguments pointing out specific problems or arguments for an incompatible position.

There are other observers with low complexities (for example, other humans). I can imagine the possibility of being transformed into one of them with probability depending on their complexity, and I can use my intuitive preferences to make decisions which make that imagined situation as good as possible.

I see. I think this whole line of investigation is very confused.

Comment author: paulfchristiano 03 April 2011 08:10:59PM 0 points

No, you are talking about a different property of beliefs, lack of stability to new information. I claim that because of lack of reflective understanding of the origins of the belief, you currently shouldn't be certain, without any additional object-level arguments pointing out specific problems or arguments for an incompatible position.

I don't quite understand. I am not currently certain, in the way I use the term. The way I think about moral questions is by imagining some extrapolated version of myself who has thought for long enough to arrive at stable beliefs. My confidence in a moral assertion is synonymous with my confidence that it is also held by this extrapolated version of myself. Then I am certain of a view precisely when my view is stable.

In what other way can I be certain or uncertain?

Comment author: Vladimir_Nesov 03 April 2011 08:21:53PM 0 points

You can, for example, come to different conclusions depending on future observations; in that case further reflection would not move your level of certainty, the belief would be stable, and yet you'd remain uncertain. Consider your belief about the outcome of a future coin toss: this belief is stable under reflection, but doesn't claim certainty.

Generally, there are many ways in which you can (or should) make decisions or come to conclusions; your whole decision problem, all the heuristics that make up your mind, can have a hand in deciding how any given detail of your mind should be.

(Also, being certain because you don't expect to change your mind sounds like a bad idea: it could license arbitrary beliefs, since the future process of potentially changing your mind that you're thinking about could be making the same calculation, locking you into a belief with no justification other than itself. The only reason it doesn't obviously work this way is that you retain other, healthy reasons for drawing conclusions, so this particular wrong ritual washes out.)