Comment author:matheist
12 March 2014 05:53:10AM
4 points
[-]
It's really great to see all of these objections addressed in one place. I would have loved to be able to read something like this right after learning about AIXI for the first time.
I'm convinced by most of the answers to Xia's objections. A quick question:
Yes... but I also think I'm like those other brains. AIXI doesn't. In fact, since the whole agent AIXI isn't in AIXI's hypothesis space — and the whole agent AIXItl isn't in AIXItl's hypothesis space — even if two physically identical AIXI-type agents ran into each other, they could never fully understand each other. And neither one could ever draw direct inferences from its twin's computations to its own computations.
Why couldn't two identical AIXI-type agents recognize one another to some extent? Stick a camera on the agents, put them in front of mirrors and have them wiggle their actuators, make a smiley face light up whenever they get rewarded. Then put them in a room with each other.
Lots of humans believe themselves to be Cartesian, after all, and manage to generalize from others without too much trouble. "Other humans" isn't in a typical human's hypothesis space either — at least not until after a few years of experience.
Comment author:cromulented
12 March 2014 10:05:46PM
4 points
[-]
Why couldn't two identical AIXI-type agents recognize one another to some extent? Stick a camera on the agents, put them in front of mirrors and have them wiggle their actuators, make a smiley face light up whenever they get rewarded. Then put them in a room with each other.
If you're suggesting this as a way around AIXI's immortality delusion, I don't think it works. AIXI "A" doesn't learn of death even if it witnesses the destruction of its twin, "B", because B's destruction does not cause A's input stream to terminate. It's just a new input, no different in kind from any other. If you're considering AIXI(tl) twins instead, there's also the problem that a full model of an AIXI(tl) can't fit into its own hypothesis space, so a duplicate can't either.
Lots of humans believe themselves to be Cartesian, after all, and manage to generalize from others without too much trouble. "Other humans" isn't in a typical human's hypothesis space either — at least not until after a few years of experience.
AIXI doesn't just believe it's Cartesian. It's structurally unable to believe otherwise. That may not be true of humans.
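The "just another input" point can be made concrete with a toy sketch (my own illustration, not real AIXI): a Bayesian agent whose hypothesis space contains only environments that always emit a next percept. Since no hypothesis can represent the percept stream ending, no observation — including one encoding "twin B was destroyed" — can move the agent's credence in its own continuation below 1.

```python
from fractions import Fraction

# Hypothetical environment models: each maps a percept history to a
# distribution over the next percept. Every model always emits
# *something*; none can represent "no next percept".
def always_zero(history):
    return {0: Fraction(1)}

def alternating(history):
    nxt = 1 - (history[-1] if history else 0)
    return {nxt: Fraction(1)}

def uniform(history):
    return {0: Fraction(1, 2), 1: Fraction(1, 2)}

MODELS = [always_zero, alternating, uniform]

def posterior(history):
    """Bayesian posterior over MODELS given a percept history."""
    weights = []
    for m in MODELS:
        w = Fraction(1, len(MODELS))  # uniform prior
        for i, percept in enumerate(history):
            w *= m(history[:i]).get(percept, Fraction(0))
        weights.append(w)
    total = sum(weights)
    return [w / total for w in weights]

def prob_next_percept_exists(history):
    """Probability the agent assigns to receiving *any* next percept."""
    post = posterior(history)
    return sum(p * sum(m(history).values()) for p, m in zip(post, MODELS))

# Whatever the agent has observed -- even percepts that, to us, encode
# the destruction of its twin -- the probability that its own stream
# continues is exactly 1, because every surviving hypothesis says so.
print(prob_next_percept_exists([0, 1, 0]))  # 1
```

The structural point carries over: since AIXI's hypothesis space contains only environments that keep supplying percepts forever, "the stream ends" has prior (and hence posterior) probability zero, no matter what the twin's destruction looks like from the outside.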