timtyler comments on Outlawing Anthropics: An Updateless Dilemma - Less Wrong

Post author: Eliezer_Yudkowsky 08 September 2009 06:31PM




Comment author: pengvado 08 September 2009 09:09:06PM 12 points

Well, we don't want to build conscious AIs, so of course we don't want them to use anthropic reasoning.

Why is anthropic reasoning related to consciousness at all? Couldn't any kind of Bayesian reasoning system update on the observation of its own existence (assuming such updates are a good idea in the first place)?
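The kind of update pengvado describes can be made concrete without any notion of consciousness. A minimal sketch, assuming an SIA-style rule (the likelihood of "I exist" scales with the number of observers a hypothesis predicts) and purely illustrative priors and observer counts:

```python
# Hypothetical hypotheses and priors, purely for illustration.
priors = {"few_observers": 0.5, "many_observers": 0.5}

# Under an SIA-style rule, the likelihood of the observation "I exist"
# is proportional to how many observers each hypothesis predicts.
observer_counts = {"few_observers": 1, "many_observers": 2}

# Ordinary Bayesian update: multiply prior by likelihood, then normalize.
unnormalized = {h: priors[h] * observer_counts[h] for h in priors}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

print(posterior)  # many_observers ends up at 2/3, few_observers at 1/3
```

Nothing in the arithmetic requires the reasoner to be conscious; it only requires that "I exist" be treated as an observation with a likelihood under each hypothesis.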

Comment author: timtyler 09 September 2009 09:14:42AM 0 points

Consciousness is really just a name for having a model of yourself that you can reflect on and act on, plus a whole bunch of other confused interpretations which don't really add much.

To do anthropic reasoning you have to have a simple model of yourself which you can reason about.

Machines can do this too, of course, without too much difficulty. By that definition, though, doing so typically makes them conscious. Perhaps we can imagine a machine performing anthropic reasoning while dreaming - i.e. while most of its actuators are disabled, so that it would not normally be regarded as conscious. But then how would we know about its conclusions?