Houshalter comments on The Problem with AIXI - Less Wrong
If human adults didn't grasp death any better than AIXI does, they'd routinely drop anvils on their heads, literally and not just 'so to speak'.
What do you mean? What would be the alternative to 'symbolic reasoning'?
If a smart AI values things about the world outside its head, it won't deliberately hack itself (e.g., it won't alter its hardware to entertain happy delusions), because it won't expect a policy of self-hacking to make the world actually better. It's the actual world it cares about, not its beliefs about, preferences over, or enjoyable experiences of the world.
The problem with AIXI isn't that it lacks the data or technology needed to self-modify. It's that it has an unrealistic prior. These aren't problems shared by humans. Humans form approximately accurate models of how new drugs, food, injuries, etc. will affect their minds, and respond accordingly. They don't always do so, but AIXI is special because it can never do so, even when given unboundedly great computing power and arbitrarily large supplies of representative data.
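For reference, a sketch of AIXI's action-selection rule in Hutter's standard formulation (horizon m, universal Turing machine U, programs q of length ℓ(q) serving as environment hypotheses) makes the "unrealistic prior" concrete: every hypothesis is a program that keeps emitting percepts, so the event "my percept stream ends" gets zero prior weight.

$$
a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big(r_k + \cdots + r_m\big) \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
$$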
AIXI doesn't necessarily drop an anvil on its head. It just doesn't believe that its input sequence can ever stop, no matter what happens. This seems to me like what the vast majority of humans believe.
For clarity: are you referring to belief in an afterlife/reincarnation? Or are you saying that most humans are, most of the time, not mindful of their own mortality?
I am referring to an afterlife of some kind.