So8res comments on Failures of an embodied AIXI - Less Wrong
No worries :-)
Zero. To be honest, I don't spend much time thinking about AIXI. My inclination with regard to AIXI is to shrug and say "it's not ideal for all the obvious reasons, and I can't use it to study self-modification", and then move on.
However, it turns out that what I think are the "obvious reasons" aren't so obvious to some. While I'm not personally confident that AIXI can be modified to be useful for studying self-modification, ignoring AIXI entirely isn't the most cunning strategy for forming relationships with other AGI researchers (who are researching different parts of the problem, and for whom AIXI may indeed be quite interesting and relevant).
If anything, my "goal with this inquiry" is to clearly sketch specific problems with AIXI that make it less useful to me and point towards directions where I'd be happy to discuss collaboration with researchers who are interested in AIXI.
It is not the case that I'm working on these problems in my free time: left to my own devices, I just use (or develop) toy models that better capture the part of the problem space I care about.
I really don't want to get dragged into a strategy discussion here. I'll state a few points that I expect we both agree upon, but forgive me if I don't answer further questions in this vein during this discussion.
I don't think discussing how humans deal with this problem is relevant. Are there ways the universe could be that I can't conceive of? Almost certainly. Can I figure out the laws of my universe as well as a perfect Solomonoff inductor? Probably not. Yet it does feel like I could be convinced that the universe is uncomputable, and so Solomonoff induction is probably not an idealization of whatever it is that I'm trying to do.
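(For context, the point about uncomputable universes can be made precise. In the standard formulation of Solomonoff induction, which the comment assumes but doesn't spell out, the universal prior over a binary string $x$ is

$$M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)},$$

where the sum ranges over programs $p$ of length $\ell(p)$ on which the universal monotone machine $U$ outputs a string beginning with $x$. Every hypothesis receiving nonzero weight is a computable program, so an uncomputable environment gets prior probability zero, and no amount of evidence can update a zero prior. That is the sense in which being convincible that the universe is uncomputable sits badly with Solomonoff induction as an idealization.)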
I don't personally view this as an induction problem, but rather as a priors problem. And though I do indeed think it's a problem, I'll note that this does not imply that the problem captures any significant fraction of my research efforts.