Sometimes toy models are helpful, and sometimes they are distractions that lead nowhere or embody a mistaken preconception. I read you as claiming these models are distractions, not that no model is possible. Accurate?
I very much favor bottom-up modelling based on real evidence over mathematical models that only look neat because we imposed our preconceptions on the problem a priori.
The classes U and F above, should something like that ever come to pass, need not be AIXI-like (nor need they involve utility functions).
Right. Which is precisely why I don't like it when we attempt to do FAI research under the assumption of AIXI-like-ness.
I don't think it's an active waste of time to explore the research that can be done with things like AIXI models. I do, however, think that flaws of AIXI-like models should be taken as flaws of AIXI-like models, rather than generalized to all possible AI designs.
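For concreteness, the canonical member of that class is Hutter's AIXI, which (up to notational variation) picks actions by expectimax over total future reward, with environments weighted by the length of the shortest programs producing the observed history:

$$
a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \bigl(r_k + \cdots + r_m\bigr) \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
$$

Here $U$ is a universal monotone Turing machine (unrelated to the class U above), $q$ ranges over its programs, $\ell(q)$ is program length, and $m$ is the horizon. Any flaw traceable to the Solomonoff prior or to this expectimax-over-reward structure is a fact about this equation, not about agent designs in general.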
So for example, some people (on this site and elsewhere) have said we shouldn't presume that a real AGI or real FAI will necessarily use VNM utility theory to make decisions. For various reasons, I think exploring that idea-space is worthwhile: relaxing the VNM utility and rationality assumptions can take us both closer to how real, actually-existing minds work and closer to how we normatively want an artificial agent to behave.
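To make "relaxing VNM" concrete, here is a minimal sketch (the payoffs, utility exponent, and weighting parameter are illustrative choices of mine, not anything from this thread) using the classic Allais lotteries. No expected-utility maximizer can prefer A over B while also preferring D over C, but an agent with a nonlinear probability-weighting function, matching the common human pattern, can:

```python
# Allais lotteries as (outcome, probability) pairs; payoffs in $M.
A = [(1, 1.00)]
B = [(5, 0.10), (1, 0.89), (0, 0.01)]
C = [(1, 0.11), (0, 0.89)]
D = [(5, 0.10), (0, 0.90)]

def eu(lottery, u):
    """VNM agent: rank lotteries by expected utility."""
    return sum(p * u(x) for x, p in lottery)

def weight(p, gamma=0.61):
    """Tversky-Kahneman probability weighting: overweights small p,
    underweights near-certain p. Any such nonlinearity violates
    VNM's independence axiom."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def pt(lottery, u):
    """Non-VNM agent: expected utility over warped probabilities."""
    return sum(weight(p) * u(x) for x, p in lottery)

u = lambda x: x**0.25  # concave utility; exponent chosen for illustration

# A>B forces 0.11*u(1) > 0.10*u(5) + 0.01*u(0); D>C forces the reverse,
# so no utility function lets an EU maximizer choose both A and D.
print("EU:", "A" if eu(A, u) > eu(B, u) else "B",
             "D" if eu(D, u) > eu(C, u) else "C")   # -> EU: B D
# The probability-weighting agent reproduces the human pattern.
print("PT:", "A" if pt(A, u) > pt(B, u) else "B",
             "D" if pt(D, u) > pt(C, u) else "C")   # -> PT: A D
```

The point is not that probability weighting is the right relaxation, only that the space outside the VNM axioms contains coherent, implementable decision rules.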
Modulo nitpicking, agreed on both points.